I have a web-based enterprise application. One problem we are facing is that the backend is tightly coupled with our front end. Now we want to come up with mobile apps plus a desktop client for our software. For this we decided to move our backend to web services so that multiple front-end clients can use the same function calls. But we are still wondering whether this is the right move, or whether another approach would help. Are there any pitfalls to using web services? The thing that concerns me most is speed. Are web services inherently slower?
Appreciate your help.
Most web services (SOAP, REST) are used to create platform-independent services. What I'm trying to say is that the web service can be accessed by applications written in Java, on the .NET platform, etc., without worrying about interoperability between the languages and technologies used.
Also, most systems have multiple applications that need to access the same data, so writing a separate data abstraction layer for each application is not good OO design.
In your case, web services will be necessary, as you will have several presentation layers (desktop, mobile and web application) that will access your system over a network protocol. I would suggest a web service because you can keep the business logic/rules inside the service, and each client just performs a request/response cycle to get/post data.
It depends on the type of information you're passing back and forth, but in the broader scheme of things an SOA is the way to go if you're looking to have one 'back end' system. I'm sure you'll be happy with the performance, and in some cases even surprised. In most cases the database calls are still the bottleneck. I've led the charge of implementing an SOA on a lot of projects. In many cases the applications have performed even faster with web services, not because the calls are that much faster but because it leads you to better design and to take advantage of other technologies like local and remote caching.
I would say there are generally two pitfalls of using web services as the back end.
1.) How they are implemented. Read some books and do some testing before you start writing the real thing. Develop standards and try to stick to them.
2.) Single point of access is a good thing, but not in an unstable or quickly changing environment. Have some redundancy and a backup plan, as well as a sandbox and staging area.
It depends on what you mean by 'web services': SOAP, REST, or all such technologies?
SOAP services have the big advantage that they have a well-defined contract through the WSDL, and clients can easily generate stubs from it. On the other hand, SOAP services can also mean a lot more work (e.g. if you use them from a client that has no SOAP library, such as iOS or a plain HTML app). Additionally, the SOAP machinery brings a lot of overhead, which could matter if you intend to deliver large amounts of data, e.g. to mobile devices.
There you must take into consideration that the clients could have limited bandwidth (speed and data volume). Furthermore, the client must process the whole XML document, whereas JSON, for example, could be easier to handle.
Web services do not in themselves degrade performance (speed); a SOAP web service is just SOAP over HTTP. In your case the back end should be exposed as a web service so that it can be invoked by multiple clients, and you can also add extra security if you need it.
I have an application where both the backend and the frontend are built in Java. The backend provides some functionality such as accessing the DB, while the frontend, built with Struts, calls those functions.
I'm looking for a way to make any Java class easily callable over TCP; ideally, in my mind, this could be done by extending a specific class, let's say:
public class MyClass extends ThisIsAnAPI
thereby making all of its public methods callable over a network protocol.
With such a framework the frontend could be easily implemented in other languages, like Ruby (On Rails), by making network requests to the backend APIs written in Java and exposed on TCP.
Any tips?
If you are likely to go to a JavaScript/Ajax UI then I would take the time to expose the backend as RESTful services. Using JAX-RS this is a matter of a few lines of code, some annotations and an interface.
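To give a sense of how little code that is, here is a rough JAX-RS sketch; BookService and Book are hypothetical stand-ins for whatever backend classes you already have:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Hypothetical resource class exposing an existing backend service over HTTP.
@Path("/books")
public class BookResource {

    // In a real application this would be injected; created inline for brevity.
    private final BookService bookService = new BookService();

    @GET
    @Path("{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public Book get(@PathParam("id") long id) {
        // JAX-RS handles routing and (de)serialization; the backend code stays unchanged.
        return bookService.findById(id);
    }
}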
If you are staying pure Java, it's pretty trivial these days to turn a POJO into a remotely callable EJB: just a couple of annotations.
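For illustration, a minimal sketch of that "couple of annotations" approach in EJB 3 style; the interface and bean names are made up:

import javax.ejb.Remote;
import javax.ejb.Stateless;

// Hypothetical remote business interface (its own file in practice).
@Remote
public interface InventoryService {
    int stockLevel(String sku);
}

// The POJO becomes a remotely callable session bean with a single annotation.
@Stateless
public class InventoryServiceBean implements InventoryService {
    @Override
    public int stockLevel(String sku) {
        return 0; // placeholder; a real bean would query the database
    }
}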
It may sound like overkill, but in terms of effort and cost (given a free app server such as WebSphere CE or JBoss) it's not that big a deal. However if you don't go for EJBs then you need to look at two big issues:
Security. You've got some TCP-callable services. How sensitive are those services? Do they need authentication and authorisation? You can all too easily open up sensitive databases to the whole company or even the internet.
Resilience and Scaling. How will you manage failure scenarios? EJBs exposed via RMI/IIOP can be clustered, and hence you can scale and deal with errors. If you start with a technology capable of doing that, even if you don't need the functionality right now, you are well placed for the future.
I would start with RMI, which is designed for exactly this. You create an interface which the client uses and the server implements.
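A minimal RMI sketch (the names are made up) showing that split between the shared interface and the server implementation:

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Shared contract: the client compiles against this, the server implements it.
interface GreetingService extends Remote {
    String greet(String name) throws RemoteException;
}

// Server side: export the implementation and register it under a name.
class GreetingServer implements GreetingService {
    public String greet(String name) {
        return "Hello, " + name;
    }

    public static void main(String[] args) throws Exception {
        GreetingService stub =
                (GreetingService) UnicastRemoteObject.exportObject(new GreetingServer(), 0);
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("greeting", stub);
    }
}

The client then calls LocateRegistry.getRegistry("host").lookup("greeting") and casts the result to GreetingService.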
Try Hessian, a lightweight binary remoting protocol that also has bindings for several other platforms, so you get C#/C++/Flash/... for free. I think it is a bit easier to work with than RMI.
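A rough sketch of the Hessian client side, reusing a hypothetical GreetingService interface; on the server you would typically extend com.caucho.hessian.server.HessianServlet and implement the same interface:

import com.caucho.hessian.client.HessianProxyFactory;

// Hypothetical client: the factory builds a proxy that speaks Hessian over HTTP.
public class GreetingClient {
    public static void main(String[] args) throws Exception {
        HessianProxyFactory factory = new HessianProxyFactory();
        GreetingService service = (GreetingService) factory.create(
                GreetingService.class, "http://localhost:8080/greeting");
        System.out.println(service.greet("world"));
    }
}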
If you need more portability for the future, consider exposing POJOs via SOAP/REST (most WS stacks have this ability; only a few extra annotations are needed, if any).
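For the SOAP case, a hedged JAX-WS sketch: with @WebService a single annotation exposes a POJO's public methods, and Endpoint.publish gives you a quick standalone server for testing (the class name and URL are made up):

import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// Hypothetical POJO exposed as a SOAP service; the WSDL is generated automatically.
@WebService
public class GreetingEndpoint {

    public String greet(String name) {
        return "Hello, " + name;
    }

    public static void main(String[] args) {
        // Lightweight embedded publisher, handy for trying the service out.
        Endpoint.publish("http://localhost:8081/greeting", new GreetingEndpoint());
    }
}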
You might want to take a look at JMS. It's quite high level and easy to use, but you need to run a message broker. It's a somewhat different architecture from direct point-to-point communication.
As several people have mentioned RMI, you can look at Spring, which has support for this and which I have used successfully myself: http://static.springsource.org/spring/docs/2.0.x/reference/remoting.html
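The linked Spring 2.0 documentation shows the XML configuration; as a rough sketch in the later Java-config style, the exporter publishes an existing service bean (GreetingService is hypothetical here) in the RMI registry:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.remoting.rmi.RmiServiceExporter;

@Configuration
public class RemotingConfig {

    @Bean
    public RmiServiceExporter greetingServiceExporter(GreetingService greetingService) {
        // Exposes an ordinary Spring bean over RMI without it extending anything.
        RmiServiceExporter exporter = new RmiServiceExporter();
        exporter.setServiceName("GreetingService");
        exporter.setService(greetingService);
        exporter.setServiceInterface(GreetingService.class);
        exporter.setRegistryPort(1099);
        return exporter;
    }
}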
I have to re-design an existing application which uses Pylons (Python) on the backend and GWT on the frontend.
In the course of this re-design I can also change the backend system.
I tried to read up on the advantages and disadvantages of various backend systems (Java, Python, etc) but I would be thankful for some feedback from the community.
Existing application:
The existing application was developed with GWT 1.5 (runs now on 2.1) and is a multi-host-page setup.
The Pylons MVC framework defines a set of controllers/host pages in which GWT widgets are embedded ("classical website").
Data is stored in a MySQL database and accessed by the backend with SQLAlchemy/Elixir. Server/client communication is done with RequestBuilder (JSON).
The application is not a typical business application with complex CRUD functionality (transactions, locking, etc.) or a sophisticated permission system (though a simple ACL is required).
The application is used for visualization (charts, tables) of scientific data. The client interface is primarily used to display data in read-only mode. There might be some CRUD functionality but it's not the main aspect of the app.
Only a subset of the scientific data is going to be transferred to the client interface, but this subset is generated out of large datasets.
The existing backend uses numpy/scipy to read data from db/files, create matrices and filter them.
The number of users accessing or using the app is relatively small, but the burden on the backend for each user/request is pretty high because it has to read and filter large datasets.
Requirements for the new system:
I want to move away from the multi-host-page setup to the MVP architecture (one single host page).
So the backend only serves one host page and acts as a data source for AJAX calls.
Data will be still stored in a relational database (PostgreSQL instead of MySQL).
There will be a simple ACL (defines who can see what kind of data) and maybe some CRUD functionality (but it's not a priority).
The size of the datasets is going to increase, so the burden on the backend is probably going to be higher. There won't be many concurrent requests but the few ones have to be handled by the backend quickly. Hardware (RAM and CPU) for the backend server is not an issue.
Possible backend solutions:
Python (SQLAlchemy, Pylons or Django):
Advantages:
Rapid prototyping.
Re-Use of parts of the existing application
Numpy/Scipy for handling large datasets.
Disadvantages:
Dynamically typed language -> debugging can be painful
Server/Client communication (JSON parsing or using 3rd party libraries).
Python GIL -> scaling with concurrent requests ?
Server language (python) <> client language (java)
Java (Hibernate/JPA, Spring, etc)
Advantages:
One language for both client and server (Java)
"Easier" to debug.
Server/Client communication (RequestFactory, RPC) easier to implement.
Performance, multi-threading, etc
Object graph can be transferred (RequestFactory).
CRUD "easy" to implement
Multitier architecture (features)
Disadvantages:
Multitier architecture (complexity, requires a lot of configuration)
Handling of arrays/matrices (not sure if there is a counterpart to numpy/scipy in Java).
Not all features of the Java web application layers/frameworks used (overkill?).
I didn't mention any other backend systems (RoR, etc) because I think these two systems are the most viable ones for my use case.
To be honest, I am not new to Java but relatively new to Java web application frameworks. I know my way around Pylons, though in the new setup not many of the Pylons features (MVC, templates) will be used, because it will probably only serve as an AJAX backend.
If I go with a Java backend I have to decide whether to do a RESTful service (and clearly separate client from server) or use RequestFactory (tighter coupling). There is no specific requirement for "RESTfulness". In case of a Python backend I would probably go with a RESTful backend (as I have to take care of client/server communication anyways).
Although mainly scientific data is going to be displayed (not part of any Domain Object Graph) also related metadata is going to be displayed on the client (this would favor RequestFactory).
In case of python I can re-use code which was used for loading and filtering of the scientific data.
In case of Java I would have to re-implement this part.
Both backend systems have their advantages and disadvantages.
I would be thankful for any further feedback.
Maybe somebody has experience with both backend and/or with that use case.
thanks in advance
We had the same dilemma in the past.
I was involved in designing and building a system that had a GWT frontend and Java (Spring, Hibernate) backend. Some of our other (related) systems were built in Python and Ruby, so the expertise was there, and a question just like yours came up.
We decided on Java mainly so we could use a single language for the entire stack. Since the same people worked on both the client and server side, working in a single language reduced the need to context-switch when moving from client to server code (e.g. when debugging). In hindsight I feel that we were proven right and that that was a good decision.
We used RPC, which as you mentioned yourself definitely eased the implementation of c/s communication. I can't say that I liked it much though. REST + JSON feels more right, and at the very least creates better decoupling between server and client. I guess you'll have to decide based on whether you expect you might need to re-implement either client or server independently in the future. If that's unlikely, I'd go with the KISS principle and thus with RPC which keeps it simple in this specific case.
Regarding the disadvantages for Java that you mention, I tend to agree on the principle (I prefer RoR myself), but not on the details. The multitier architecture and configuration aren't really a problem IMO - Spring and Hibernate are simple enough nowadays. IMO the advantage of using Java across client and server in this project trumps the relative ease of using Python, plus you'll be introducing complexities in the interface (i.e. by doing REST vs the native RPC).
I can't comment on Numpy/Scipy and any Java alternatives. I've no experience there.
I am building a new web application and playing around with the architecture and would like some opinions about splitting UI and business logic and running them on separate servers.
This means that if someone requests a page, the front end will itself request the data from a back-end server and then not actually perform any calculations/logic but just use the data to populate a template and then respond with that.
Back-End: Java + JAX-WS
Front-End: Kohana 3.1 (PHP)
Data Interchange Format: JSON
Advantages:
clear separation of logic and UI
ability to choose language/framework best suited for either end
possibility to add logic/UI servers depending on which one is the bottleneck in case of performance issues
possibility to make the API publicly available without any extra work (pseudo-internal requests will go to the same API as requests from third-party applications)
ability to change (if need be) the framework/language of either side without having to edit the other
ability to specify different server hardware according to the needs of the logic/UI application
better security (if API private) (??)
Disadvantages:
latency (??)
more servers
So what do you think? Is this a good idea? I haven't been able to find much information so far but my guess is that many big sites do it this way, right? How will performance be affected (I am thinking of running it on EC2)? What are further advantages/disadvantages? Any thoughts on the languages/frameworks choices?
A similar architectural pattern is often employed, though generally the UI part is moved to the client. So you have a backend that responds with JSON, a quick HTTP server with full-blown caching (which can use HTML5 app caching as well) and a rich JavaScript client which requests the JSON from the backend and builds the UI.
More on this pattern: http://www.metaskills.net/2008/05/24/the-ajax-head-design-pattern/
The main negative of the approach is that it is generally more work in the beginning - if you don't need an external API then using a simpler architecture will be easier to program.
You also might want to employ the idea of keeping your servers stateless and let the client side handle any state.
This simplifies the whole load-balancing and fail-over stuff and makes you think about a more resource-oriented architecture.
And if you are set on JSON already, you might want to explore the idea of NOT mapping POJOs to your data and using a document store like MongoDB or CouchDB to access the JSON data directly.
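As a rough illustration of that idea, using the newer MongoDB Java driver (the database, collection and field names are made up), a stored document can be handed straight to the client as JSON without ever being mapped to a POJO:

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import org.bson.Document;

public class JsonPassThrough {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> books =
                    client.getDatabase("shop").getCollection("books");

            // Fetch the raw document and serve it as-is; no POJO mapping involved.
            Document doc = books.find(Filters.eq("isbn", "978-0132350884")).first();
            String json = (doc != null) ? doc.toJson() : "{}";
            System.out.println(json);
        }
    }
}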
I'm about to start developing a large-scale system and I'm struggling with which direction to proceed. I've done plenty of Java web apps before and I have plenty of experience with servlet containers and GWT and some experience with Spring. The problem is most of my webapps have been thrown together just to be a proof of concept and what I'm struggling with is what set of frameworks to use. I need to have both a browser based application as well as a web service designed to support access from mobile devices (Android and iPhone for now). Ideally, I'd like to design this system in such a way that I don't end up rewriting all of my servlets for each client (browser and phone) although I don't mind having some small checks in there to properly format the data.
In addition, although I'm the only developer now, that won't necessarily be the case down the road and I'd like to design something that scales well both with regards to traffic and number of developers (isn't just a nightmare to maintain).
So where I am now is planning on using GWT to design the browser-based interface, but I'm struggling with how to reuse that code to present the interface (most likely XML) for the mobile devices. Using GWT RPC would, I think, make it relatively easy to do all of the AJAX in the browser, but might make generating XML for the mobile phones difficult. In addition, I like the idea of using something like Hibernate for persistence and Spring Security to secure the whole thing. Again, I'm not sure how well those will cooperate with GWT (I think Hibernate should be fine...)
There's obviously a lot more to this than I've presented here, but I've tried to give you the 5-minute overview. I'm a bit stumped and was wondering if anybody in the community had any experience starting from this place. Does what I'm trying to do make sense? Is it realistic? I have no doubt I can make all of these frameworks speak the same language, I'm just wondering if it's worth my time to fight with them. Also, am I missing a framework that would be really beneficial?
Thanks in advance and sorry for the relatively broad question...
Chris
I'll be pretty specific here since I have some related experience. Not all of what I'll write will apply, but I'm hoping something does.
My first piece of advice would be to keep any code that's directly dependent on any framework as "stupid" as possible. If you can, consider such code more or less disposable (implementation-wise; API contracts exposed to clients need to be stable, of course).
Focus on what makes your application unique, and try to make that independent of GWT etc. The facade pattern is something I can recommend - keeping the app-specific logic behind one and exposing it by wiring the presentation layer with it has served us well. If your back-end depends on third party infrastructure (via web services etc), decouple those dependencies from your code with the adapter pattern.
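A minimal sketch of what such a facade might look like (all names are hypothetical): the presentation layers depend only on this interface, so the framework-specific code stays thin and disposable:

import java.util.List;

// Hypothetical application facade: the one interface every presentation
// layer (GWT browser UI, REST/XML mobile interface, ...) is wired against.
public interface ReportFacade {
    List<String> availableReportIds(String userId);
    String renderReport(String reportId);
}

// A trivial stand-in implementation; the real one would sit in front of the
// persistence code and any adapters around third-party services.
class InMemoryReportFacade implements ReportFacade {
    public List<String> availableReportIds(String userId) {
        return List.of("daily-summary", "monthly-usage");
    }

    public String renderReport(String reportId) {
        return "<report id='" + reportId + "'/>";
    }
}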
I have spent most of my working time during the past 5 years on building something that matches what you described in many ways. Today it's more an app framework than an app - it has a few different browser interfaces (WAP/standard web+ajax/Facebook app), an interface for 2-way SMS usage, and a REST/XML interface for thick mobile clients - BREW, iPhone, Android and Blackberry.
When it comes to frameworks, for persistence, we have used Hibernate. All the different pieces of code are tied together with Spring. The browser interfaces have been ported from Struts (1.x) to Wicket. The SMS and mobile client interfaces are built on top of Restlet.
Using multiple different presentation layer frameworks (such as Wicket and Restlet in our case) has not been a problem, as long as that code is kept lean and business rules are kept out of it (to the extent possible). There is nothing that says that your browser interface has to be packaged into the same WAR as your mobile client interface - with Spring you can easily wire several web applications with the same facade. This has been helpful to us especially in allowing multiple developers to work on well isolated pieces of the application.
In my opinion, trying to achieve maximal reuse of code in the presentation layer has caused more harm than good. That has always been the most volatile part of our application, far more so than we expected.
I'm developing a REST webservice (Java, Jersey). The people I'm doing this for want to directly access the webservice via Javascript. Some instinct tells me this is not a good idea, but I cannot really explain that instinct. My natural approach would have been to have the webservice do the real logic and database access, but also have some (relatively thin) server-side script layer (e.g. in PHP). Clients would talk to the PHP layer which in turn would talk to the webservice. (The webservice would be pretty local to the apache/PHP server and implicitly trust calls from the script layer. The script layer would take care of session management.)
(Btw, I am not talking about just hiding the webservice behind an Apache which simply redirects calls.)
But as I find myself at a lack of words/arguments to explain my instinct, I wonder whether my instinct is right - note that while I have been developing all kinds of software in all kinds of languages and frameworks for like 17 years, this is the first time I develop a webservice.
So my question is basically: what are your opinions? Are there any standard setups? Is my instinct totally wrong? Or partially? ;P
Many thanks,
Max
PS: I might add a few bits of information about the planned usage of the whole application:
will be accessed by different kinds of users, partly general public, partly privileged
thus, all major OS/browser combinations can be expected as clients
however, writing the client is not my responsibility
will potentially have very high load/traffic
logic of webservice will later be massively expanded for another product which is basically a superset of the functionality of the current project
there is a significant likelihood that at some point an API should be exposed which can be used by 3rd party developers - obviously, with some restrictions
at some point, the public view of the product should become accessible via smartphones, too (in other words, maybe a customized version of the site to adapt to the smaller display and different input methods)
I don't think that accessing a REST webservice directly via e.g. JavaScript is generally a bad idea, because that is what the REST architecture is designed for. For your use case there are some implications to consider:
Your webservice will have to take care of user management. Since the REST architecture does not support server-side session state, you will have to do authentication and authorization on every request, and users will have to maintain their state on the client side (a minimal filter sketch follows this list).
Your webservice implementation will have to take care of issues like caching and load balancing and all the other things you might otherwise have assigned to, e.g., the PHP "proxy" script.
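To make the first point concrete, here is a minimal sketch of per-request authentication using a JAX-RS 2.0 container filter (a newer API than the Jersey version of that era; the token check itself is hypothetical):

import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.core.HttpHeaders;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.Provider;

// Runs before every resource method: there is no server-side session, so each
// request has to carry its own credentials (e.g. a signed token).
@Provider
public class TokenAuthFilter implements ContainerRequestFilter {

    @Override
    public void filter(ContainerRequestContext ctx) {
        String auth = ctx.getHeaderString(HttpHeaders.AUTHORIZATION);
        if (auth == null || !isValidToken(auth)) {
            ctx.abortWith(Response.status(Response.Status.UNAUTHORIZED).build());
        }
    }

    // Hypothetical validation; in practice verify a signature or consult a token store.
    private boolean isValidToken(String auth) {
        return auth.startsWith("Bearer ");
    }
}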
For your requirements:
all major OS/browser combinations can be expected as clients
Since your webservice will only deliver data (e.g. JSON or XML), this should not be a problem. The JavaScript part just has to take care to issue the correct requests.
will potentially have very high load/traffic
If you strictly follow the REST architecture you can make use of http caches. But keep in mind that the stateless nature will always cause more traffic.
logic of webservice will later be massively expanded for another product which is basically a superset of the functionality of the current project
The good thing about open webservices is that you can loosely couple them together.
there is a significant likelihood that at some point an API should be exposed which can be used by 3rd party developers - obviously, with some restrictions
Again, with a RESTful webservice you already have an API exposed to developers. It is up to your clients to decide whether this is a good or a bad thing.
at some point, the public view of the product should become accessible via smartphones
Another pro for making your REST webservice publicly accessible. Most smartphone APIs support HTTP requests, so you will just have to develop the GUI for the specific smartphone platform, and it can make direct calls to the webservice.
Firstly, I am just extending what Daff replied above, from the point of view of my own learning while designing and implementing RESTful web services; please note that I am still learning.
When I started learning RESTful WS with Java and Jersey (0.3 IIRC), I had similar questions, and the primary cause was a total misconception of the RESTful architecture. The gravest mistake I made was using JAXB for XML and Jackson for JSON (de)serialization directly from/to the persistence beans. This violates REST principles and created serious obstacles to building a high-performance, highly available, scalable web service.
My mistake was thinking in terms of an API, a.k.a. a service; when we think RESTful WS we should forget "API" and think in terms of resources, and we should take great care in interlinking those resources. My understanding only came after further reading, which I suggest to anyone wanting to create their own web service. My conclusion is that a resource is to a RESTful WS/architecture what an API is to a native interface or a SOAP web service. So design your resources with care, and understand that there is no limit to how many resources your web service may have.
So here is how I ended up implementing systems that expose an "API" through a RESTful WS. I create an API layer which deals with the business entities, for example PersistentBook, which contains either the id of a PersistentAuthor or the object itself. All business logic concerning persistent entities lives in the API implementation layer.
The web service layer uses the API layer to perform its operations on resources. It uses the persistent entities to generate representations and vice versa; the key feature here is that PersistentBook's representation carries a URI to the PersistentAuthor rather than embedding it. If I want to use automated (de)serialization I create another domain layer, e.g. Book, Author, etc.
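As a small, hypothetical illustration of that separation, the wire-level representation carries a link to the author resource instead of embedding the PersistentAuthor object:

// Hypothetical representation (wire format) class, distinct from PersistentBook.
public class BookRepresentation {

    private final String title;
    private final String authorUri; // e.g. "/authors/42" rather than an embedded PersistentAuthor

    public BookRepresentation(String title, String authorUri) {
        this.title = title;
        this.authorUri = authorUri;
    }

    public String getTitle() {
        return title;
    }

    public String getAuthorUri() {
        return authorUri;
    }
}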
Now, as Daff mentioned, caching is inevitable; my checkpoints for it are:
Support for the 'Cache-Control', 'Last-Modified' and 'ETag' response headers and the 'If-Modified-Since' and 'If-None-Match' request headers is key. A note from my more recent learning: use the 'Vary' header when representations vary (content negotiation) based on the 'Accept' header. (See the sketch after this list.)
Using a server-side cache such as Squid or Varnish in case clients do not use caching. One thing I learnt: having all the right header support counts for nothing if clients do not support them, and in fact it increases the cost in terms of computation and bandwidth ;)
Use of Content-Encoding.
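A hedged JAX-RS sketch of the first checkpoint (the payload and max-age are made up): honour If-None-Match with an ETag and attach Cache-Control to the response:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.CacheControl;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.EntityTag;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Request;
import javax.ws.rs.core.Response;

@Path("/books/latest")
public class CachedBookResource {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Response latest(@Context Request request) {
        String body = "{\"title\":\"Hypothetical Book\"}";       // placeholder payload
        EntityTag etag = new EntityTag(Integer.toHexString(body.hashCode()));

        // If the client's If-None-Match matches, this builder is pre-set to 304 Not Modified.
        Response.ResponseBuilder builder = request.evaluatePreconditions(etag);
        if (builder == null) {
            builder = Response.ok(body).tag(etag);                // otherwise build the full response
        }

        CacheControl cacheControl = new CacheControl();
        cacheControl.setMaxAge(60);                               // let caches keep it for 60 seconds
        return builder.cacheControl(cacheControl).build();
    }
}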