I am starting to learn about EJBs. Although I know they handle the business logic, I don't understand why an EJB has to implement an interface.
I know that the interface lists the methods and is used by the client to access them, but what if I don't use an interface?
I know that the no-interface view exists, but when should I use an interface, then?
Could you please explain it using a non-IT example? I am taking a course on Java EE 7 and I am stuck on this part; I have read the Oracle tutorial but I've had problems understanding it.
I apologize for my wording mistakes.
Thanks in advance.
The reason for an interface is that you need to invoke a method in one JVM and have it transparently invoke your EJB in another JVM. Much of the complexity of Java EE comes from the fact that it is designed to work across multiple JVMs.
This can be handled in many ways. The approach chosen here is that an interface makes this almost transparent in your code (just compare it with invocation through reflection): the object in the first JVM does not contain your code, but code that knows how to reach the other JVM, ask it to invoke your method, and return the result.
In other words, an interface allows the compiler to help you get it right in your code, and the application server then provides the magic glue in the object you call so that it reaches your EJB.
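A minimal sketch of that split (the names PriceQuoteService and quoteFor are invented for illustration): the client compiles only against the interface, and the container hands it a proxy object that forwards each call to the bean, possibly in another JVM.

import javax.ejb.Remote;
import javax.ejb.Stateless;

// The business interface: this is all the client in the other JVM ever sees.
@Remote
public interface PriceQuoteService {
    double quoteFor(String productCode);
}

// The bean implementation lives in the application server's JVM.
@Stateless
public class PriceQuoteServiceBean implements PriceQuoteService {
    @Override
    public double quoteFor(String productCode) {
        // real business logic would go here
        return 42.0;
    }
}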
An EJB doesn't have to implement an interface (anymore). Well, this is only true if you don't have different VMs accessing the same EJB container. You could, for example, host your JBoss in the cloud and have another Java EE server (e.g. TomEE) at your company site (or anywhere else) and let TomEE retrieve its EJB instances from the JBoss application server. Then you have to program against an interface, since you don't know what the implementation will be.
Since EJB 3.1, no-interface views are possible. You're free to use interfaces, but you can be perfectly happy without them... as long as you don't need distributed EJB services.
An interface is still a great choice when you design a big system, though. You can easily change your underlying logic if you program against the interface and not the concrete class, so if specifications change, the code is much easier to maintain.
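For completeness, this is roughly what the no-interface view mentioned above looks like (bean and client names are made up, and each class would live in its own file): a local client injects the bean class itself, and the container still wraps it in a proxy.

import javax.ejb.EJB;
import javax.ejb.Stateless;

// Since EJB 3.1 a session bean needs no interface; its public methods form the no-interface view.
@Stateless
public class GreetingBean {
    public String greet(String name) {
        return "Hello, " + name;
    }
}

// In another managed component (servlet, other bean, ...) inside the same application:
public class GreetingClient {
    @EJB
    private GreetingBean greetingBean;   // the bean class itself, no interface involved

    public String sayHello() {
        return greetingBean.greet("world");
    }
}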
Related
In my Spring project I have many simple services for fetching data (just simple CRUD). The design chosen by the developers who started this project was to create an interface and an implementation for each service, like
public interface UserService
and then implementation like
public class UserServiceImpl implements UserService
Since there is no chance that UserService will ever have another implementation, I'm really sick of these Impl suffixes, and the more I read (e.g. this article), the more I realise that I have reasons to be sick of them.
I had a discussion with a friend from the team last week and shared my thoughts with him, but his answer was 'basically you're right, but Spring likes interfaces and works better with them than with classes'.
Unfortunately I'm not an expert in Spring and, even though I tried to look for some arguments, I was not able to find an answer as to whether he was right.
Are there any strong arguments for this approach in Spring, i.e. having an interface for every little service class?
I can tell from real-world projects that it works well without interfaces, with only the implementing class. Following the principle "You aren't gonna need it" (YAGNI), you simplify your code by skipping them. Dependency injection also works well with classes; interfaces are not a requirement for it.
Sure, you can write and reuse test implementations, but you can achieve the same with mocks, e.g. with Mockito, and override the behavior of your implementation class for test cases.
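A small sketch of that point, with invented names (GreetingService, GreetingController) and assuming JUnit 4 and Mockito on the test classpath; the service is a plain class with no interface, and the test mocks it directly.

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;
import org.springframework.stereotype.Service;

// A concrete service with no interface; Spring injects (and, if needed, CGLIB-proxies) it as-is.
@Service
class GreetingService {
    public String greet(String name) {
        return "Hello, " + name;
    }
}

// A collaborator that depends on the concrete class, not on an interface.
class GreetingController {
    private final GreetingService greetingService;

    GreetingController(GreetingService greetingService) {
        this.greetingService = greetingService;
    }

    public String hello(String name) {
        return greetingService.greet(name).toUpperCase();
    }
}

public class GreetingControllerTest {

    @Test
    public void mockitoCanStubTheConcreteClass() {
        // Mockito mocks classes as well as interfaces, so no *Impl split is needed for testing.
        GreetingService greetingService = mock(GreetingService.class);
        when(greetingService.greet("world")).thenReturn("Hi, world");

        GreetingController controller = new GreetingController(greetingService);
        assertEquals("HI, WORLD", controller.hello("world"));
    }
}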
I've gone through all the answers here, but would like to add more on proxies.
Spring AOP can use either a JDK dynamic proxy or a CGLIB proxy:
If the class implements an interface, Spring will use a JDK dynamic proxy (preferred whenever there is a choice).
If the class does not implement an interface, Spring will use a CGLIB proxy.
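A toy illustration of that rule using Spring's own ProxyFactory and AopUtils (the Notifier, EmailNotifier and SmsSender names are invented); the same selection logic applies when Spring AOP proxies your @Service beans.

import org.springframework.aop.framework.ProxyFactory;
import org.springframework.aop.support.AopUtils;

interface Notifier {
    void send(String message);
}

class EmailNotifier implements Notifier {          // has an interface
    @Override
    public void send(String message) {
        System.out.println("email: " + message);
    }
}

class SmsSender {                                   // no interface
    public void send(String message) {
        System.out.println("sms: " + message);
    }
}

public class ProxyDemo {
    public static void main(String[] args) {
        Object withInterface = new ProxyFactory(new EmailNotifier()).getProxy();
        Object withoutInterface = new ProxyFactory(new SmsSender()).getProxy();

        System.out.println(AopUtils.isJdkDynamicProxy(withInterface));  // true: interface -> JDK proxy
        System.out.println(AopUtils.isCglibProxy(withoutInterface));    // true: no interface -> CGLIB
    }
}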
Wherever you want to reap the benefits of the dependency injection (DI) pattern, you need to program against abstractions, usually an interface.
There are more benefits to DI, but the most persuasive seems to be that it allows unit testing. There your interfaces will get at least one more implementation (the mock implementation) when you want to test your class in isolation from its dependencies (the production implementations of those interfaces).
That said, it doesn't mean every class must implement some interface. Some parts of the code can be tightly coupled together without a problem.
Note that whether or not you use Spring plays no role in the decision to use DI or not.
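A minimal sketch of that idea, with invented names: the class under test depends only on the abstraction, and the unit test supplies a second, hand-written implementation.

// The abstraction the production code is written against.
interface RateProvider {
    double rateFor(String currency);
}

class PriceCalculator {
    private final RateProvider rateProvider;

    PriceCalculator(RateProvider rateProvider) {
        this.rateProvider = rateProvider;
    }

    double inCurrency(double amountInEur, String currency) {
        return amountInEur * rateProvider.rateFor(currency);
    }
}

// The "one more implementation": a fake used only by the test.
class FixedRateProvider implements RateProvider {
    @Override
    public double rateFor(String currency) {
        return 2.0;
    }
}

public class PriceCalculatorTest {
    public static void main(String[] args) {
        PriceCalculator calculator = new PriceCalculator(new FixedRateProvider());
        System.out.println(calculator.inCurrency(10.0, "USD") == 20.0);  // prints true
    }
}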
It isn't a must, and this may be opinion-based, but you add an interface to enable future flexibility of the service.
Although you don't see a real use for it now, it will allow you to use a different implementation of specific services inside unit/integration tests.
You can add a test implementation and use it instead of the real service when executing tests (for example by using a different Spring profile), as sketched below.
This can also be done using mocks, as @Simulant points out.
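As a rough sketch of the profile-based variant (the MailService name and the "test" profile name are made up):

import org.springframework.context.annotation.Profile;
import org.springframework.stereotype.Service;

public interface MailService {
    void send(String to, String body);
}

@Service
@Profile("!test")            // used everywhere except when the "test" profile is active
class SmtpMailService implements MailService {
    @Override
    public void send(String to, String body) {
        // talk to the real SMTP server here
    }
}

@Service
@Profile("test")             // picked up only when the "test" profile is active
class InMemoryMailService implements MailService {
    @Override
    public void send(String to, String body) {
        // just record the message so the test can assert on it
    }
}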
Actually it's not needed. Currently, microservices and small code bases are popular.
So normally, in a REST API backend, you really don't have the chance to have several implementations of a certain interface.
In this situation, a concrete class with @Service is enough.
As others have suggested, it really depends on the use case. Spring, and Java in general, started out verbose, with a design where interfaces are supposed to define what clients of the implementation classes can see, but I am finding less and less verbose code these days, especially with Spring Boot and libraries like Lombok.
So it is not mandatory to create interfaces for services and DAOs, but it is preferred if you are working on a medium-sized code base with multiple developers and possibly clients consuming those APIs outside of the application as well. If you are working on a small or proof-of-concept project, you can build a CRUD application in one Java class too.
I am working on an API for a piece of software so my users can extend it without modifying the source code. However, for security reasons I want only certain functions to be accessible to certain classes. Is there any way to do this? Also, I have no code because I have no idea how to do this.
Thanks! -Trent
I have two thoughts on this. One is that you can look at how Minecraft Forge created their plugin API.
Another way is to have a limited API between your core code and the actual plugins, but you need to be careful about the platform. For example, if you write the core application in Java or C#, then I can use aspect-oriented programming (AOP) to bypass your security and have my code change the behavior of yours.
Functional programming (FP) languages can protect you better from this type of approach, provided you are not also running on one of those platforms, but they are not perfect either.
So there is a trade-off between power and convenience: how useful do you want your application to be, and how secure?
One possible solution that may work is to go with something similar to Minecraft (though I doubt they actually do this): give the user a stub application. They can extend it with plugins, and the interface functions they can modify live in the stub. When the program starts, the plugins are loaded and the interface may be modified or extended, but then the core program is pulled down, put into the stub, and the actual program runs. The core program can be recompiled and manipulated so that method names are changed, making reflection harder to use, but taking this approach, and doing it well, would be hard.
BTW, I like Alex T's response; I just used different terms for some of his, such as AOP instead of reflection, and immutability being part of FP.
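To make the "limited API" suggestion above a bit more concrete, here is a rough sketch with invented names; the plugin only ever sees the narrow PluginContext interface, never the core classes.

import java.util.HashMap;
import java.util.Map;

// What plugin authors implement.
public interface Plugin {
    void onStartup(PluginContext context);
}

// The only operations the host is willing to hand out.
interface PluginContext {
    void registerCommand(String name, Runnable action);
    void log(String message);
}

// The core keeps its internals private and gives plugins only the narrow view.
final class CoreEngine {
    private final Map<String, Runnable> commands = new HashMap<>();   // not reachable through the API

    void loadPlugin(Plugin plugin) {
        plugin.onStartup(new PluginContext() {
            @Override
            public void registerCommand(String name, Runnable action) {
                commands.put(name, action);
            }

            @Override
            public void log(String message) {
                System.out.println("[plugin] " + message);
            }
        });
    }
}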
You mention a JAR, which means you are using something that runs on a JVM, so you may want to read up on AspectJ, as it can significantly alter the behavior of applications. You can have private methods, but I can put in code that runs instead of yours, or change the parameters or the return value before or after the method is called.
To protect variables inside classes, you can make them private and accessible via getter and setter methods with varying levels of protection. This also applies to classes themselves; if you want to prevent the user from instantiating a class, you can mark the class's constructor as protected to allow instantiation only within its package (and from subclasses).
If you want to hide the implementation details of a class altogether, you can declare it as class X instead of public class X (package-private), which hides it from the API for standard development.
This will quickly get you the behaviour you're after, but there's an aspect of Java called reflection, which allows an executable Java program to analyze and manipulate its own implementation; in this regard, no field or method is ever completely safe.
You can also safeguard variables by providing access to them via 'immutable' objects; these are objects designed to prevent the caller from modifying the original contents.
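Putting those visibility rules together in one rough sketch (class and member names are invented; each top-level class would normally live in its own file):

// Package-private class: invisible outside its own package, so it is not part of the public API.
class InternalCache {
    // implementation details hidden from API users
}

// Mutable class whose state is only reachable through methods with chosen visibility.
class Account {

    private double balance;              // private field: no direct access

    protected Account() {                // protected constructor: only subclasses and the same package can instantiate
    }

    public double getBalance() {         // public read access
        return balance;
    }

    void deposit(double amount) {        // package-private: not callable from outside the package
        balance += amount;
    }
}

// Immutable value object: final fields, no setters, so callers cannot change it once created.
public final class Money {

    private final String currency;
    private final long cents;

    public Money(String currency, long cents) {
        this.currency = currency;
        this.cents = cents;
    }

    public String getCurrency() {
        return currency;
    }

    public long getCents() {
        return cents;
    }
}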
In a Java project I need to call (let's say generic, utility) web services, for instance giving a city code as a parameter and getting details about that city. The web services are already implemented and I can only consume them.
I had the same situation before in another project and created a class for that kind of web service. That class had several web service call methods, all of them static.
Now, I don't really want to do the same thing again because I don't think that's the right way to do it (hard to debug, etc.). I also don't want to make a different class for each of these methods and create an instance for each call, because they are too generic and instantiation seems like overhead in that situation.
So, the alternatives that come to my mind are:
Using the old method: one static class, several methods.
A singleton class. It will most probably have synchronisation problems, so there will be overhead from locking mechanisms.
Neither seems like the best solution; what would you suggest?
Thanks in advance.
When using management beans in Java, a bean's interface is exposed through an MBean interface. But if there are many parameters to be exposed through the MBean, and with different versions of the system new parameters might be added to or removed from the MBean, it becomes very tedious to manage such a system.
Is there any design pattern that can be used to avoid such problems?
If you want things to happen dynamically, you've got to have some logical rule to determine which fields/methods of the managed class should be exposed and which should not.
Now, you might be able to implement a dynamic MBean (see a great explanatory example here) and use reflection to gather up-to-date information about the managed class. The reflected class info should then be filtered against the previously mentioned rule (hoping all other programmers follow it! I wouldn't count on it).
OK, so this isn't a design pattern. I think the real recommended practice is that the programmer who adds a certain property should take a few moments to reason about whether it is worth exposing and whether it is safe to expose. And when removing a property, one should consider whether it breaks any existing client code out there.
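To give an idea of what the reflection-based dynamic MBean could look like, here is a rough, read-only sketch (no filtering rule applied; it simply exposes the managed object's public fields as attributes):

import java.lang.reflect.Field;

import javax.management.Attribute;
import javax.management.AttributeList;
import javax.management.AttributeNotFoundException;
import javax.management.DynamicMBean;
import javax.management.MBeanAttributeInfo;
import javax.management.MBeanInfo;

public class ReflectiveMBean implements DynamicMBean {

    private final Object managed;

    public ReflectiveMBean(Object managed) {
        this.managed = managed;
    }

    @Override
    public Object getAttribute(String name) throws AttributeNotFoundException {
        try {
            Field field = managed.getClass().getField(name);
            return field.get(managed);
        } catch (ReflectiveOperationException e) {
            throw new AttributeNotFoundException(name);
        }
    }

    @Override
    public void setAttribute(Attribute attribute) throws AttributeNotFoundException {
        throw new AttributeNotFoundException("read-only sketch: " + attribute.getName());
    }

    @Override
    public AttributeList getAttributes(String[] names) {
        AttributeList list = new AttributeList();
        for (String name : names) {
            try {
                list.add(new Attribute(name, getAttribute(name)));
            } catch (AttributeNotFoundException ignored) {
                // skip attributes that no longer exist
            }
        }
        return list;
    }

    @Override
    public AttributeList setAttributes(AttributeList attributes) {
        return new AttributeList();   // nothing is writable in this sketch
    }

    @Override
    public Object invoke(String actionName, Object[] params, String[] signature) {
        return null;                  // no operations exposed in this sketch
    }

    @Override
    public MBeanInfo getMBeanInfo() {
        Field[] fields = managed.getClass().getFields();   // this is where the filtering rule would go
        MBeanAttributeInfo[] attributes = new MBeanAttributeInfo[fields.length];
        for (int i = 0; i < fields.length; i++) {
            attributes[i] = new MBeanAttributeInfo(
                    fields[i].getName(),
                    fields[i].getType().getName(),
                    "discovered via reflection",
                    true,    // readable
                    false,   // writable
                    false);  // not an "is" getter
        }
        return new MBeanInfo(managed.getClass().getName(),
                "Attributes discovered via reflection", attributes, null, null, null);
    }
}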
Yes, use interfaces. Make sure that the provider of the MBean as well as the consumer uses the same Java interface.
For the provider part, have a look at how Spring can assemble an MBean from an interface.
On the consumer side, it is not very difficult to write an MBean client that takes an interface and translates that into MBean access operations.
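One way to get such a typed client with the standard JMX API is JMX.newMBeanProxy. A sketch with invented names follows; it assumes an MBean is actually registered under that ObjectName (a remote MBeanServerConnection works the same way as the platform MBeanServer shown here).

import java.lang.management.ManagementFactory;

import javax.management.JMX;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// The shared interface used by both the MBean provider and its client.
public interface CacheStatsMBean {
    int getHitCount();
    void reset();
}

class CacheStatsClient {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("com.example:type=CacheStats");   // made-up object name

        // The proxy translates interface calls into MBean access operations.
        CacheStatsMBean stats = JMX.newMBeanProxy(server, name, CacheStatsMBean.class);
        System.out.println("hits: " + stats.getHitCount());
        stats.reset();
    }
}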
I'm working on modifying an existing application implemented as an EJB 2.1 stateless session bean. I'd like to put in some kind of generic, detailed logging of all calls made to the EJB.
Stuff I'd like to log:
Name of method being called
Serialized copy of all passed parameters
Serialized copy of return value
I implemented something like this for an ASP.NET REST web service before by simply putting in a hook before the request is processed and one right before the response is sent back. It produces a lot of data, but it's well worth it for debugging a long-running system.
I'm not sure how the same can be done for an EJB. I'd like to avoid AOP since the application doesn't currently use it. Interceptors won't work because it's not EJB 3.0.
Does anyone know of a way to hook into the EJB processing pipeline to look at requests as they come in? Is there another approach to doing this?
Thanks
I think there are only two ways to know when a method of an EJB (or any other class) is called:
Bad solution: using the Java Debug Interface (JDI) you can know which line is being executed, just as you do when you are debugging Java with your IDE. It's complicated, and there are problems when you are debugging an application in the same JVM where JDI runs.
Good solution: as Thomas Owens says, AOP is the recommended approach. If you are not using it in your project now, this is a good reason to start.
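For illustration, a logging aspect along those lines might look like this (AspectJ annotation style; the bean's package and class name are invented). Woven with the AspectJ compiler or load-time weaving, it logs the method name, the arguments and the return value, which covers the three items in the question, without touching the EJB code itself.

import java.util.Arrays;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class EjbCallLoggingAspect {

    // Match every method of the bean class; adjust the pointcut to your real bean.
    @Around("execution(* com.example.ejb.MyStatelessBean.*(..))")
    public Object logCall(ProceedingJoinPoint joinPoint) throws Throwable {
        String method = joinPoint.getSignature().toShortString();
        System.out.println("ENTER " + method + " args=" + Arrays.toString(joinPoint.getArgs()));

        Object result = joinPoint.proceed();   // invoke the real EJB method

        System.out.println("EXIT  " + method + " result=" + result);
        return result;
    }
}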