While searching for benefits of JNDI, I came across many answers where it was mentioned that it helps to switch between different environments without changing the application. But why is JNDI used for DNS/LDAP/EJB access? Is it for the same reason, or are there additional benefits when it comes to those technologies?
Because it's one API instead of three, and you left out RMI and CosNaming, which makes five.
This is how I think about JNDI: it's an interface (actually more than one) that offers naming and directory services (e.g. discovering and looking up an object by name).
Like any interface, it can have different implementations (LDAP, DNS, etc.), and you can use whichever one is most suitable for solving your problem.
The design benefit is the usual one of programming against an interface: if you change the implementation, the client code that uses the API doesn't need to change.
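For illustration, here is a minimal sketch of that idea: the same javax.naming API pointed at the JDK's built-in DNS provider (the DNS server address and the queried domain are just placeholders). Swapping the factory class and provider URL for the LDAP ones leaves the lookup code essentially unchanged.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.Attributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class JndiDnsExample {
    public static void main(String[] args) throws NamingException {
        // Same javax.naming API, different provider: here the JDK's DNS provider.
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.dns.DnsContextFactory");
        env.put(Context.PROVIDER_URL, "dns://8.8.8.8"); // placeholder: any reachable DNS server

        DirContext ctx = new InitialDirContext(env);
        Attributes mailServers = ctx.getAttributes("example.com", new String[] { "MX" });
        System.out.println(mailServers);
        ctx.close();
    }
}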
Related
I'm a Java EE newbie developer. According to many resources on the internet, the service locator design pattern is an anti-pattern because it hides class dependencies (among other things) and should be avoided as much as possible in favour of dependency injection. As we know, JNDI is an implementation of the service locator pattern.
I googled to check that JNDI is an implementation of service locator and found this response which claims exactly that: Understanding JNDI
Although I see that JNDI is used in Java EE applications for many purposes (datasources, EJB lookup, ...), should I use it or should I avoid it as much as possible? And if JNDI isn't bad, then is service locator not bad either?
I think the part of your question about whether service locator is good or bad, or whether JNDI even embodies that pattern, is a bit esoteric. Having been a software architect for some years now, my general advice is that a pattern by itself is neither good nor bad; it is just a piece of a solution that was used successfully in many cases and was therefore declared a pattern, so that it can be reused for similar cases in the future. And as opposed to many years ago, when you had to know the GoF book by heart to survive an interview, nowadays it is much more important to understand the underlying concepts of a framework like Java EE than to implement all those patterns yourself, because what you have to implement is very often simple and straightforward, while using the framework relies on those concepts.
Concerning the second part of your question: you almost never need to use JNDI directly. Instead, use the concepts built on top of it, such as injection; that is what you should use in your application.
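As a hedged example of what "use injection instead of direct JNDI" looks like in practice, here is a sketch of a container-managed bean receiving a datasource via @Resource. The resource name jdbc/MainDS is an assumption and has to match what your server actually defines; the container resolves it under java:comp/env and injects the result.

import java.sql.Connection;
import java.sql.SQLException;
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.sql.DataSource;

@Stateless
public class CustomerRepository {

    // The container performs the JNDI lookup (java:comp/env/jdbc/MainDS) and injects the result;
    // this class never touches InitialContext.
    @Resource(name = "jdbc/MainDS")
    private DataSource dataSource;

    public void ping() throws SQLException {
        try (Connection connection = dataSource.getConnection()) {
            // use the connection...
        }
    }
}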
It's a horrible pattern IMHO, since it is a massive security flaw. If dependencies are known at compile time and do not change, then it's much easier to audit, gate and control possible vulnerabilities. Even within an organization, JNDI is a Trojan horse waiting to be put to nefarious use: if a bad actor can compromise some other area of your network, they can then load whatever they want via a poorly/unwittingly implemented app. The Log4j debacle is proof of that: don't allow apps to look up and load whatever, whenever. It's a stupid idea. It's unsafe.
In a business environment we end up needing various kinds of data across applications, so it makes sense to store them in a shared location. For instance, you may have a set of applications that share the same set of users, and we need authorization information for each user, listing their roles, so we know what they are allowed to access. That kind of thing goes into an LDAP data store; you can think of it as a hierarchical database optimized for fast read access.
All sorts of things can go in these datastores, it's normal for an application server to stash connection pools in them, for instance. A lot of these, like users, roles, and connection pools, are vital things you need to do your job.
JNDI is the standard Java API for accessing these LDAP datastores.
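As a rough sketch (the host, base DN and filter are all made up for illustration), this is roughly what asking such an LDAP store for a user's groups/roles looks like through plain JNDI:

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class LdapRoleLookup {
    public static void main(String[] args) throws NamingException {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ldap.example.com:389"); // placeholder host

        DirContext ctx = new InitialDirContext(env);
        SearchControls controls = new SearchControls();
        controls.setSearchScope(SearchControls.SUBTREE_SCOPE);

        // Which groups is user jdoe a member of? (base DN and filter are illustrative)
        NamingEnumeration<SearchResult> results = ctx.search(
                "ou=groups,dc=example,dc=com",
                "(member=uid=jdoe,ou=people,dc=example,dc=com)",
                controls);
        while (results.hasMore()) {
            System.out.println(results.next().getNameInNamespace());
        }
        ctx.close();
    }
}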
The nasty thing about the service locator design pattern is that the client code doing the lookup has to know too much about the thing it is querying (mainly, where to get it from), and having that lookup hard-coded in the client makes the code inflexible and hard to test. But if we use dependency injection (whether it's CDI, Spring, whatever) we can have the framework inject the value we want into the code, while the JNDI lookups are handled within the framework code and not in the application. That means you can use JNDI without your application code having to use the service locator pattern.
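One hedged way to keep the lookup out of application code, if you are on CDI, is a producer that encapsulates it in a single place; everything else just writes @Inject DataSource. The JNDI name used here is again only illustrative.

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Produces;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

// The one class in the application that knows about JNDI.
@ApplicationScoped
public class DataSourceProducer {

    @Produces
    @ApplicationScoped
    public DataSource mainDataSource() {
        try {
            // Illustrative JNDI name; match it to your server configuration.
            return (DataSource) new InitialContext().lookup("java:comp/env/jdbc/MainDS");
        } catch (NamingException e) {
            throw new IllegalStateException("Datasource not bound in JNDI", e);
        }
    }
}

Client code then simply declares a field like @Inject DataSource dataSource; and never sees the lookup.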
There is a business problem that needs to be solved. The obvious solution is an enterprise web application - a locally hosted website that provides the desired functionality.
I want to build this web application, but build it such that -
It's more of a product than a one-time solution, such that it can be customized for different clients
It is possible to provide 'fixes' for this web application, so that bugs can be removed and enhancements added with minimum impact on operations
The web app should be capable of working with different databases and existing authentication systems
Is this even possible? Is it a common enough approach that there is a known way of going about it? Would it be better to use an application framework like Spring, or to try to keep dependencies on frameworks to a minimum?
Also, any links or references to books that will guide me will be greatly appreciated.
Thanks in advance StackOverflow!
(I feel like I don't know everything I need to know before embarking on this project, so please feel free to point out things I haven't considered and should.)
Developing software, especially for re-usability, requires analyzing which parts/functions are common between use cases and which aren't, and drawing the line between re-usable (library) code and customized/specialized code.
If you know what use cases you expect or want to support in the future this can be feasible.
If you don't, you should not start trying to generalize arbitrary functionality in the first place, because you cannot know what you will be needing in the future.
Java provides some good abstractions of various functionalities, like universal DB support via JDBC.
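As a small sketch of what that abstraction buys you: the code below is the same for any JDBC-compliant database, with only the URL and credentials coming from configuration. The db.* property names and the SELECT 1 probe are illustrative, not universal (Oracle, for example, wants SELECT 1 FROM DUAL), and very old drivers may additionally need a Class.forName() call.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class JdbcPortabilityDemo {
    public static void main(String[] args) throws SQLException {
        // Vendor-specific details live in configuration, not in code:
        // e.g. "jdbc:postgresql://localhost:5432/app" or "jdbc:mysql://localhost:3306/app"
        String url = System.getProperty("db.url", "jdbc:h2:mem:app");
        String user = System.getProperty("db.user", "sa");
        String password = System.getProperty("db.password", "");

        try (Connection con = DriverManager.getConnection(url, user, password);
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1));
            }
        }
    }
}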
If you didn't already, have a look at application servers like JBoss or Glassfish. They provide plenty of basic functionality for web applications, support very loose coupling between components, and are highly configurable. To switch from one DBMS to another, for instance, it is enough to alter a single line of configuration (given the supported SQL is similar enough). Deploying applications or parts can often be done on the fly ("hot deployment") without even stopping the server.
Plus: There is a vast amount of supporting libraries and frameworks out there to help you standardize your application design.
I have been working for a while on a webapp that can be deployed in multiple locations: it is designed to be instantiated on many hosts. It's entirely possible to do this, but it is difficult. Writing the code so that it can work this way takes a great deal of care.
The key to doing it is to make all your dependencies on things explicit and all your configuration driven by properties that can be set during installation. Spring makes this quite a lot easier! In particular, the org.springframework.web.context.support.ServletContextPropertyPlaceholderConfigurer class allows you to use the servlet context as a source of values that you can then inject into your beans (e.g., via @Value annotations). It's far harder to do all that yourself. Here's (a simplified version of) what I use:
<bean class="org.springframework.web.context.support.ServletContextPropertyPlaceholderConfigurer">
    <property name="contextOverride" value="true" />
    <property name="location" value="/WEB-INF/default.properties" />
</bean>
This merges the servlet context's properties on top of the ones you provide as defaults inside your webapp (definitely a good practice if most things aren't going to need to be modified most of the time) and then uses them to define properties. I then apply a configuration property (e.g., foo.bar) to a bean property using a placeholder, like this:
@Value("${foo.bar}")
public void setFoobar(String foobar) { ... }
Things to configure that way include the database configuration, absolute locations of files holding things that can't be packaged inside the webapp, etc. You'll have to use your skill and knowledge of the application domain to work out what things need to be listed.
Other key principles are to keep as much as possible inside the webapp (so reducing the opportunity for the deployer to mess it up), to be very careful about documenting everything, and to try it with multiple servlet containers. Remember, the person deploying your webapp does not have access to the contents of your thoughts: you have to write it down and tell them exactly what to do. (Too many instructions are at the level of "click this, click that, magic happens", but those are poor instructions since the exact method will vary over time: saying why will help far more because it's more portable.)
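For example (this part is Tomcat-specific, and foo.bar is just the property from the snippet above), the deployer can override your packaged defaults from outside the WAR with a context descriptor; its <Parameter> entries become servlet context init parameters, which the configurer above merges over default.properties:

<!-- e.g. $CATALINA_BASE/conf/Catalina/localhost/myapp.xml; other containers have their own mechanism -->
<Context>
    <!-- override="false" stops a web.xml context-param of the same name from overriding this value -->
    <Parameter name="foo.bar" value="production-value" override="false" />
</Context>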
We are currently developing a product that can be deployed internally for multiple clients and also as a public portal solution. Here is our experience.
As others have pointed out, there are different factors to keep in mind.
Security
Think about the security associated with your product, and how you would map the product's functional requirements onto external security roles.
Authentication and authorization should not be part of the base product itself. Once a user is authorized, the external roles need to be mapped to product roles to achieve the desired functionality.
Images and logos that require customization.
Internationalization.
For working with multiple databases: a product typically has two different concerns, persistence and querying. Our experience was to use Hibernate to support multiple databases, though in practice we have only used two databases so far: DB2 and MySQL (see the configuration sketch at the end of this answer).
Testing every release of your product against multiple databases is a pain; your test cases effectively multiply, or at the very least have to be run periodically against each supported database.
Database-specific functions in your queries are a big no. You can use some general functions, but custom, vendor-specific ones will be a pain, and you have to be very diligent about avoiding them.
Supported browsers in your product.
Licenses of the third party jars may not be compatible / acceptable to all institutions so you have to watch out for that carefully.
As much as possible, enable properties or configuration to customize all variables.
Caching strategy and properties initialization strategies.
A framework helps the team stay on the same page, and a well-established framework like Spring has many advantages over an internal one, for performance and other considerations.
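Here is the configuration sketch referred to under the database points above: a hedged example of keeping the vendor-specific parts of a Hibernate setup (dialect, driver, URL) out of the code and in deployment configuration. The method signature is illustrative; the property keys are standard Hibernate ones.

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class SessionFactoryBuilder {

    // All vendor-specific values arrive from configuration, e.g.
    // dialect = "org.hibernate.dialect.DB2Dialect" or "org.hibernate.dialect.MySQLDialect",
    // driver  = "com.ibm.db2.jcc.DB2Driver"        or "com.mysql.jdbc.Driver".
    public static SessionFactory build(String dialect, String driver, String url,
                                       String user, String password) {
        Configuration cfg = new Configuration()
                .setProperty("hibernate.dialect", dialect)
                .setProperty("hibernate.connection.driver_class", driver)
                .setProperty("hibernate.connection.url", url)
                .setProperty("hibernate.connection.username", user)
                .setProperty("hibernate.connection.password", password);
        // Entity mappings would be added here before building the factory.
        return cfg.buildSessionFactory();
    }
}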
Cheers!
I am to interview a developer for a team lead role. Can you suggest a few good questions for the following topics:
Spring 2.x or 3.x
EJB
J2EE
Java Multithreading
Thanks.
It doesn't make sense to ask questions that are not relevant to your projects. So first of all, figure out what technologies and frameworks are used. Then you can ask questions in the following areas:
Java Core (Object#methods(); String#intern(); Checked & Unchecked exceptions and when should you use them; Memory Leaks)
Collections API (ArrayList vs. LinkedList; how HashMap works and what's the difference between HashMap, Hashtable & ConcurrentHashMap; what is a ConcurrentModificationException; what concurrent collections do you know)
Databases:
General (prepared statements; mapping class hierarchies to the relational DB; types of locks; transaction isolation)
ORM, let's say we're talking about Hibernate (Levels of cache; examples of HQL; problems with mapping concrete collections such as LinkedList; caveats implementing equals())
Concurrency (atomic operations; volatile; Executors; BlockingQueue; detecting deadlocks in applications)
MOM (in what situations it's better than SOAP; ask for some EIP)
Spring IoC (how to define an ArrayList in XML; bean scopes)
XML (namespaces; SAX vs. DOM; XML Catalogs; XPath expressions)
OO:
OOD (LSP, SRP, OCP, DRY, ISP; give some example to the interviewed guy to solve some OO-problem)
Design Patterns (all 3 types of Factories; lazy Singleton with proper synchronization, see the sketch after this list; Command vs. Strategy)
Algorithms and data structures (trees, heaps, lists; sorting, iterating, etc.)
Testing (what types exist; TDD; testing DAO layer; some puzzle to test)
Build tools, e.g. Maven (dependencyManagement; profiles; resource filtering; deploying artifacts/applications)
CI (why do we need it; what problems it solves)
Dev process (Agile/Scrum, RUP)
Work in team, team management skills (there might be plenty of questions, I'm too lazy to give examples :))
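Here is the sketch promised in the Design Patterns item: a lazily initialized singleton that needs no explicit synchronization, using the initialization-on-demand holder idiom (the usual interview alternative is double-checked locking on a volatile field).

public final class ConfigRegistry {

    private ConfigRegistry() {
        // expensive initialization would go here
    }

    // The JVM initializes the nested Holder class lazily and exactly once,
    // so getInstance() is thread-safe without any locking.
    private static final class Holder {
        private static final ConfigRegistry INSTANCE = new ConfigRegistry();
    }

    public static ConfigRegistry getInstance() {
        return Holder.INSTANCE;
    }
}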
Take a look at this question. It's pretty much the same question as yours.
One that immediately sprung to my mind regarding EJB - if you want to see if they really have some experience with EJB ask them
"When EJB 3 was introduced - tell us about the troubles you had deploying your first EJB3 applications on different Application Server implementations with regard to differences of the actual implementations and the official specs."
We had endless problems where JBoss and Oracle Application Server etc. (and even GlassFish) did not behave the way they were supposed to. The worst part was mixing EJB 2.1 with EJB 3...
Another one about Java EE - let them explain to you what Java EE actually means to them - there are a lot of misconceptions about this.
When should an object (i.e. an application-wide properties file) be kept in the session, as opposed to creating a singleton to keep it? When should each of these approaches be used?
Note: I am working on a clustered environment, if that makes any difference.
If it's supposed to be application-wide, then you should not store it in the session scope, but in the application scope. With storing in the session scope, you're unnecessarily duplicating the same data for every visitor. A singleton is also not needed at all, just instantiate once during server startup with help of a ServletContextListener and store it in the application scope using ServletContext#setAttribute().
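A minimal sketch of that approach (the properties file location and the attribute name are illustrative):

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

// Registered via <listener> in web.xml, or with @WebListener on Servlet 3.0+.
public class AppConfigLoader implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent event) {
        Properties props = new Properties();
        try (InputStream in = event.getServletContext()
                .getResourceAsStream("/WEB-INF/app.properties")) { // illustrative location
            if (in != null) {
                props.load(in);
            }
        } catch (IOException e) {
            throw new IllegalStateException("Could not load application properties", e);
        }
        // Application scope: one copy shared by all sessions and requests.
        event.getServletContext().setAttribute("appConfig", props);
    }

    @Override
    public void contextDestroyed(ServletContextEvent event) {
        event.getServletContext().removeAttribute("appConfig");
    }
}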
+1 to BalusC, but I suspect that was just a typo on your part.
As for singletons, it depends on what you mean by singleton. If you have an EJB annotated with @Singleton, then that's fine (other dependency-injection providers may also support this pattern).
If you're talking about the standard singleton pattern, where you keep the instance in a static variable, then that's a bad idea. You should generally avoid static variables in Java EE or servlet containers, because the class loading can be a bit tricky - you may wind up with multiple copies when you don't expect it, or you may be sharing a single copy between different applications, or you may be keeping stuff in memory when you redeploy your application. You can make an exception in cases where the variable isn't exposed outside the class, and you don't really care how many copies of it you have (for example, logger objects).
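For completeness, a hedged sketch of the EJB 3.1 @Singleton variant mentioned above. Note that in a cluster each node still gets its own instance, so it suits read-mostly data such as an application-wide properties file.

import java.util.Properties;
import javax.annotation.PostConstruct;
import javax.ejb.Lock;
import javax.ejb.LockType;
import javax.ejb.Singleton;
import javax.ejb.Startup;

// One container-managed instance per application (per node in a cluster).
@Singleton
@Startup
public class ApplicationSettings {

    private final Properties settings = new Properties();

    @PostConstruct
    void load() {
        // Load from a classpath resource, database, etc.; left empty in this sketch.
    }

    @Lock(LockType.READ) // allow concurrent readers
    public String get(String key) {
        return settings.getProperty(key);
    }
}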
Note: I am working on a clustered environment, if that makes any difference.
I don't disagree with what Mike and BalusC have already written, but I feel you're entering territory where implementation details matter. What you do and how you do it will depend on the back-end services, what sort of clustering, and what the application requirements are. I think the question is too broad to give specific answers.
Furthermore...
All Java EE profiles share a set of common features, such as naming and resource injection, packaging rules, security requirements, etc. This guarantees a degree of uniformity across all products, and indirectly applications, that fall under the “Java EE platform” umbrella. This also ensures that developers who are familiar with a certain profile, or with the full platform, can move easily to other profiles, avoiding excessive compartmentalization of skills and experience.
Java EE specifications define a certain level of compliance, but the goal isn't to make every infrastructure homogeneous. That sort of thing would add complexity to an already nebulous problem domain.
Currently, the only way I know of doing RPC for POJOs in Java is the very complex EJB/JBoss solution.
Is there any better way of providing similar functionality with a thinner layer (within or without a Java EE container), using RMI or something that can serialize and send full-blown objects over the wire?
I'm not currently interested in HTTP/JSON serialization BTW.
EDIT: For clarification: I'm trying to replace an old EJB 2.1/JBoss 4 solution with something that is easier to manage at the container level. I need to have entire control over the database (I'm planning to use iBATIS, which would allow me to use fairly complex SQL very easily), but the only things I want to keep over the wire are:
Invocation of lookup/data modification methods (automagic serialization goes here).
Transparent session control (authentication/authorization). I still have to see how to accomplish this.
Both items have to work as a whole, of course. No access should be granted to users without credentials.
Because I'm not very fond of writing webapps, I plan to build a GUI (Swing or SWT) that would only manage POJOs, do some reporting and invoke methods from the container. I want the serialization to be as easy as possible.
As is nearly always the case, Spring comes to the rescue. From the reference documentation, you will want to read Chapter 17. Remoting and web services using Spring.
There are several methods to choose from. The beauty of Spring is that all your interfaces and implementations are vanilla POJOs. The wiring into RMI or whatever is handled by Spring. You can:
1. Export services using RMI: probably the simplest approach;
2. Use HTTP invoker: if remote access is an issue, this might be better for firewalls, etc. than pure RMI; or
3. Use Web Services, in which case I would favour JAX-WS over JAX-RPC.
Spring has the additional benefit in that it can do the wiring for both the server and the client, easily and transparently.
Personally I would choose either (2) or (3). HTTP is network friendly. It's easy to deploy in a Web container. Jetty's long-lived connections give you the option of (effectively) server push over HTTP.
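As a sketch of option (2): the server exports an ordinary POJO behind an interface, and the client gets a proxy implementing that same interface. com.example.AccountService, the bean names and the URL are all assumptions; the exporter is mapped to its URL by the DispatcherServlet (e.g. via BeanNameUrlHandlerMapping).

<!-- Server side -->
<bean name="/AccountService"
      class="org.springframework.remoting.httpinvoker.HttpInvokerServiceExporter">
    <property name="service" ref="accountService" />
    <property name="serviceInterface" value="com.example.AccountService" />
</bean>

<!-- Client side: calls on this proxy travel over HTTP using Java serialization -->
<bean id="accountServiceProxy"
      class="org.springframework.remoting.httpinvoker.HttpInvokerProxyFactoryBean">
    <property name="serviceUrl" value="http://server.example.com:8080/app/remoting/AccountService" />
    <property name="serviceInterface" value="com.example.AccountService" />
</bean>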
All of these methods allow complex objects to be sent across the wire but they are subtly different in this regard. You need to consider if your server and client are going to be distributed separately and whether it's an issue if you change the interface that you need to redistribute the class files. Or you can use a customized serialization solution (even XML) to avoid this. But that has issues as well.
Using a Web container will allow you to easily plug-in Spring Security, which can be a bit daunting at first just because there are so many options. Also, HttpSession can be used to provide state information between requests.
Simple RPC is exactly what RMI was built for. If you make a serializable interface, you can call methods on one app from another app.
If you only need value objects then just ensure the POJOs implement Serializable and write the objects across sockets (using ObjectOutputStream). On the receiving end read the objects using ObjectInputStream. The receiving end has to have a compatible version of the POJO (see serialVersionUID).
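A self-contained sketch of that approach; the port number and the Customer class are made up for illustration, and in a real split the class would live in a shared jar so both sides agree on its serialVersionUID.

import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.net.ServerSocket;
import java.net.Socket;

public class PlainSerializationDemo {

    // The value object; both sides need a compatible version of this class.
    public static class Customer implements Serializable {
        private static final long serialVersionUID = 1L;
        public final String name;
        public Customer(String name) { this.name = name; }
    }

    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(9999)) { // arbitrary port

            // Sender: connects and writes one object.
            new Thread(() -> {
                try (Socket socket = new Socket("localhost", 9999);
                     ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream())) {
                    out.writeObject(new Customer("Alice"));
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }).start();

            // Receiver: accepts the connection and reads the object back.
            try (Socket socket = server.accept();
                 ObjectInputStream in = new ObjectInputStream(socket.getInputStream())) {
                Customer received = (Customer) in.readObject();
                System.out.println("Received: " + received.name);
            }
        }
    }
}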
Hessian/Burlap 'protocol-ize' this: http://hessian.caucho.com/ and http://www.caucho.com/resin-3.0/protocols/burlap.xtp
You could try XStream (http://x-stream.github.io/) over REST. Easy to apply on a pre-existing set of POJOs.
Can you give some further information as to what you're trying to achieve, since you're not interested in REST/JSON?