Is it right to synchronize a method when building an API [closed] - java

When building a RESTful API web service that performs CRUD operations, is it good practice to synchronize the methods? Or should we avoid synchronizing them, since synchronization reduces performance?
Could anyone please explain?

It is not "right practice" or "best practice".
It doesn't even make sense to me. In general.
A RESTful API consists of URLs for making HTTP / HTTPS "calls". The notion of "synchronized" is foreign to this.
Now, there might be use-cases where it would make sense to use some form of mutual exclusion in the implementation of a RESTful API. However, it is not clear that declaring Java API methods as synchronized gives the correct semantics, certainly not without knowing how your RESTful API is mapped onto your Java methods and your domain objects.
If we are talking about mapped Java methods in a Spring @RestController class, declaring those methods as synchronized would result in mutual exclusion on the current instance of the controller class. Spring controller objects are singletons by default, so you would end up processing your RESTful requests serially. I can't see why you would want to do that.
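To make that concrete, here is a minimal sketch of what a synchronized handler does to a singleton controller, and one finer-grained alternative. The controller and its endpoints are invented for illustration, not taken from the question.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CounterController {

    // BAD: the controller is a singleton, so this lock is shared by every request
    // to this class. Updates for *all* ids are processed one at a time.
    private final Map<String, Long> serialCounters = new ConcurrentHashMap<>();

    @PostMapping("/serial/{id}/increment")
    public synchronized long incrementSerially(@PathVariable String id) {
        return serialCounters.merge(id, 1L, Long::sum);
    }

    // BETTER: push the concurrency concern into a thread-safe structure (or into
    // database transactions). Requests for different ids no longer block each other.
    private final Map<String, AtomicLong> counters = new ConcurrentHashMap<>();

    @PostMapping("/{id}/increment")
    public long increment(@PathVariable String id) {
        return counters.computeIfAbsent(id, k -> new AtomicLong()).incrementAndGet();
    }
}

Whether a thread-safe collection, a per-resource lock, or a database transaction is the right tool depends entirely on which invariant you are actually trying to protect, which is the point of the advice below.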
The correct way to approach this is to work out what you are actually trying to achieve, and then look for ways to achieve it. Don't go looking for a solution before you understand the problem. And don't look for "best practice" justifications. Think things through ... in the context of your problem.
There are no best practices.

Related

How to design an API wrapper with bulky operations on domain object? (Need guidance) [closed]

I need some guidance in designing an API wrapper for my backend APIs. I have tried to keep it as specific as possible.
Context: We have a project which supports certain file operations like edit, create, merge, etc. All the services are exposed as REST APIs. Now I have to create an API wrapper over this (a client library) in Java. I've been reading about DDD and trying to approach the problem using that.
As per my thinking, the core object in my project would be File, along with some minor DTOs for talking to the backend. Edit, create and merge will be the verbs here, acting on my domain object. I want to make it as easy as possible for an external developer to integrate the API. I would like the design to be something like this:
For creating a file: File.create(). For editing: File.edit(). The same goes for the other operations. I also want the ability to chain operations (along the lines of fluent interfaces) for readability.
For example, if you want to create a file and then convert it, it should be something like: File.create().convert(required params)
My problem is that each of the operations is bulky and async. I don't want to write all the async error-handling logic in the File class. Chaining the methods like above won't be easy either if they return CompletableFuture objects, and it will be harder to maintain.
Question: What is a better way of solving this problem?
I am not looking for a spoonfed design. I just want to be guided toward a design approach which fits the scenario. Feel free to point out if I am understanding DDD wrong.
Very roughly: your domain model is responsible for bookkeeping. The effects on the state of the filesystem are implemented in your infrastructure layer, and you send the results back to the bookkeeper.
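One way to reconcile that with the fluent, async surface the question asks for is to keep the chaining object thin and let it only compose futures produced by an infrastructure port. This is only a sketch under those assumptions; FileApi, FileOps and FileHandle are invented names, not part of any existing library.

import java.util.concurrent.CompletableFuture;

// Infrastructure port: the thing that actually talks to the REST backend.
interface FileOps {
    CompletableFuture<String> create(String name);
    CompletableFuture<String> convert(String fileId, String targetFormat);
}

// Thin fluent handle: each verb wraps the composed future in a new handle,
// so the async error handling stays inside the futures, not inside a File class.
final class FileHandle {
    private final FileOps ops;
    private final CompletableFuture<String> fileId;

    FileHandle(FileOps ops, CompletableFuture<String> fileId) {
        this.ops = ops;
        this.fileId = fileId;
    }

    FileHandle convert(String targetFormat) {
        return new FileHandle(ops, fileId.thenCompose(id -> ops.convert(id, targetFormat)));
    }

    CompletableFuture<String> result() {
        return fileId;
    }
}

// Entry point the external developer sees.
final class FileApi {
    private final FileOps ops;

    FileApi(FileOps ops) {
        this.ops = ops;
    }

    FileHandle create(String name) {
        return new FileHandle(ops, ops.create(name));
    }
}

// Usage: fileApi.create("report.docx").convert("pdf").result().join();

The domain model's bookkeeping and the async plumbing each live behind their own interface, which is roughly the separation described above.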

Why isn't java.io.Bits public? [closed]

I've done a lot with IO in Java, and after looking for code to convert primitives to byte arrays and back, I found the source for java.io.Bits on one of the Java source code hosting websites. After a quick glance I realized it's exactly what I need, except that it's package-private. So I made a copy which I made public, stored it in my project's package and used it (only in personal projects, I assure you). I find it very useful.
My question is: why is this package-private? I can see it being really useful for people who work with IO, and I see no disadvantage in changing its visibility to public (in rt.jar). Or is there perhaps an equivalent (and please don't mention other libraries)?
Here's a link to a randomly chosen website that has Java source for java.io.Bits: http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/java/io/Bits.java
You'd have to ask one of the Java devs to be sure, but by making it package-private, the API can be treated as "internal", i.e. it might change or disappear at any time. This means the API can be developed relatively quickly, and doesn't need to go through the same thorough testing process that public APIs need to go through (since once they're released, they're stuck there for good).
In short, making an API public has long term implications, and it requires much, much more work than just hitting a switch.
I would hazard a guess it started life as a "hacked together" group of functions useful for a few other classes in the IO package, and has just stayed there ever since.
It's package-private, sure, but there are public APIs that expose the same behavior, e.g. ByteBuffer.wrap(array).getInt(index) and the other methods on ByteBuffer. You're almost certainly better off using that properly designed, well-documented public API than trying to wrap or copy internal implementation details from Java.
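For the concrete primitive-to-bytes use case in the question, the public ByteBuffer route looks roughly like this (a small self-contained example, not code from java.io.Bits):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class PrimitiveBytes {
    public static void main(String[] args) {
        // primitive -> byte array (big-endian by default, matching DataOutputStream)
        byte[] bytes = ByteBuffer.allocate(Long.BYTES).putLong(42L).array();

        // byte array -> primitive
        long value = ByteBuffer.wrap(bytes).getLong();

        // byte order can be made explicit when the wire format requires it
        int little = ByteBuffer.wrap(new byte[] {1, 0, 0, 0})
                               .order(ByteOrder.LITTLE_ENDIAN)
                               .getInt();

        System.out.println(value + " " + little); // prints "42 1"
    }
}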

Huge remote interface -> best practices? [closed]

I just had this interesting discussion with a colleague of mine.
We have a remote interface which is 2000+ lines of code and has 100+ methods in it.
The implementation of this interface has some logic of its own, but also delegates to other interfaces, each related to a certain concern.
I argued we should also split up the remote interface based on concern.
Advantages:
- Separation of concerns
- Just create a different endpoint for each interface; client developers only use the interfaces they are interested in
- No "monster interface"
- E.g. security can be configured per endpoint
He opposes this, arguing:
- One remote interface is easy for the client developers
I'm wondering what the "general opinion" on this is?
Is it a good practice to create a remote facade which groups all your concerns in one endpoint?
If you think in terms of scalability, and of the fact that behind that interface you could end up having multiple services (at least in an SOA architecture), it is definitely worth splitting it into multiple interfaces.
And let us see why:
- Indeed, as you pointed out, you will have separation of concerns
- You will have a lightweight API: developers will end up using only the features they need
- It is easier to implement versioning: it might not be the case now, but it can be the case in the future
- Readability and maintainability on your side: smaller interfaces make it easier to debug code and to understand what each concern is supposed to do
- Indeed, security/authorization/authentication per endpoint or per remote interface is much easier to implement
I am sure this list could continue, but these are just a few points I can think of now.
And to give you one last argument: "the bigger they are, the harder they fall".
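A toy sketch of what that split can look like on the Java side; the interface and method names here are invented, not taken from the original system:

// Before: one remote facade that accumulates every concern
interface MonolithicRemoteService {
    void createAccount(String name);
    void closeAccount(long accountId);
    byte[] renderMonthlyReport(long accountId);
    void grantRole(long userId, String role);
    // ... roughly a hundred more methods
}

// After: one small interface per concern, each of which can become its own
// endpoint with its own security configuration and its own release cycle
interface AccountService {
    void createAccount(String name);
    void closeAccount(long accountId);
}

interface ReportingService {
    byte[] renderMonthlyReport(long accountId);
}

interface SecurityAdminService {
    void grantRole(long userId, String role);
}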

In Java design is composition not used much anymore? [closed]

A Java developer (with lots of experience in sophisticated, high-performance environments) very recently commented that "composition is not used much anymore." I was surprised by this comment. Is this true?
On the one hand, other answers on this forum indicate that the difference between composition and aggregation can be ambiguous (can the whole exist without the part; does the part exist throughout the life of the containing object?). But perhaps in all of these cases the question stands: how to add behavior to an existing class or class hierarchy.
The context of his comment was a discussion of possible alternatives to inheritance. If this developer is correct, what has replaced composition in working practice? Mix-ins through added interfaces?
Any perspectives are welcome!
If anything, it's probably used now more than ever, thanks to dependency injection frameworks like Spring. The model that all of the Java developers I know use is to build classes that relate to one another by interface and purpose, and to let Spring inject them according to a particular configuration (e.g. the ability to replace an entire security framework just by changing a Spring configuration file and adding a few new JAR files).
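In that style, composition is just a constructor parameter that the container (or a test) fills in. A minimal, hypothetical sketch; PaymentService, PaymentGateway and StripeGateway are invented names:

import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;

interface PaymentGateway {
    void charge(String customerId, long amountCents);
}

@Component
class StripeGateway implements PaymentGateway {
    public void charge(String customerId, long amountCents) {
        // call out to the payment provider here
    }
}

@Service
class PaymentService {
    private final PaymentGateway gateway; // composition: "has-a", not "is-a"

    PaymentService(PaymentGateway gateway) { // constructor injection
        this.gateway = gateway;
    }

    void checkout(String customerId, long amountCents) {
        gateway.charge(customerId, amountCents);
    }
}

Swapping StripeGateway for another PaymentGateway implementation is a configuration change, not a change to PaymentService, which is the behavior-replacement scenario described in the answer.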

Best way to develop service layer in java [closed]

I want to develop a service layer for my application in Java. The same service layer should also be exposable as a web service.
My idea is to create one generic abstract class for database operations, and have all the other service classes extend that abstract class and do their DB operations through the abstract class's methods.
Is this a good idea?
Please help me.
It's hard to say with so few details, and without even knowing what you'll use to access the database (JDBC? JPA? Hibernate?). But:
- the service layer and the persistence layer are not the same thing. To ease decoupling and testability, I prefer having a pure service layer and a separate data access layer
- inheritance is generally not the best way to reuse code. Use a well-designed API, and prefer delegation over inheritance (see the sketch below)
Also, don't reinvent the wheel. EJB3, Spring and other frameworks have good support for developing services and exposing them as web services.
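A rough sketch of the delegation approach, using invented names (OrderService, OrderRepository, Order): the service is given a repository instead of extending a generic abstract DB class.

// Data access layer: one focused interface per aggregate, implemented with
// JDBC, JPA, Hibernate, or whatever the project actually uses.
interface OrderRepository {
    Order findById(long id);
    void save(Order order);
}

// Service layer: business logic only; persistence is delegated, not inherited.
class OrderService {
    private final OrderRepository repository;

    OrderService(OrderRepository repository) {
        this.repository = repository;
    }

    void markShipped(long orderId) {
        Order order = repository.findById(orderId);
        order.setStatus("SHIPPED");
        repository.save(order);
    }
}

class Order {
    private String status;

    void setStatus(String status) {
        this.status = status;
    }
}

Because OrderService only depends on the OrderRepository interface, it can be unit-tested with a fake repository and exposed as a web service by whichever framework you pick.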
You should consider using a framework which will help you with the routine work, e.g. Spring or Java EE. Those frameworks offer many built-in solutions, like IoC, declarative transactions, declarative security, etc.
