I have a PHP web application environment. I am using the Slim Framework as the REST interface for my application. My application's front end is written using Backbone.js and jQuery.
There is a utility (a .jar file) which, when I run it from the command line, makes a remote call (I guess this is a web service) and returns the data.
How do I best incorporate this into the web application described above?
My application front end will have a button that should make an AJAX call to the REST interface and fetch the data as JSON.
My approach:
The PHP REST interface already exists at the URL /api/phprestapi.php
Add a Java REST interface at the URL /api/javarestapi.java (perhaps) to keep the two separate
Existing environment: LAMP stack on Ubuntu
How do I achieve this? What kind of effort is involved?
Thanks for your pointers
If I understand you correctly, you need to return the data output by the jar to PHP. If that is the case, then you should start looking at the different ways to execute an external program from PHP [1]; exec is probably the best known.
If you want further control, I would recommend learning more about the web service being called by the jar and making the call to that web service directly in PHP. However, this would take a lot more time than the first option above.
[1] http://php.net/manual/en/ref.exec.php
I have a Java client that allows indexing documents on a local ElasticSearch server.
I now want to build a simple Web UI that allows users to query the ES index by typing in some text in a form.
My problem is that, before calling ES APIs to issue the query, I want to preprocess the user input by calling some Java code.
What is the easiest and "cleanest" way to achieve this?
Should I create my own APIs so that the UI can access my Java code?
Should I build the UI with JSP so that I can directly call my Java code?
Can I somehow make ElasticSearch execute my Java code before the query is executed? (Perhaps by creating my own ElasticSearch plugin?)
In the end, I opted for the simple solution of using JSON-based RESTful APIs. Time proved this to be quite flexible and effective for my case, so I thought I should share it:
My Java code exposes its ability to query an ElasticSearch index by running an HTTP server and responding to client requests with JSON-formatted ES results. I created the HTTP server with a few lines of code using the JDK's built-in com.sun.net.httpserver.HttpServer. There are more serious/complex HTTP servers out there (such as Tomcat), but this was very quick to adopt and required zero configuration headaches.
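For illustration, the whole server can be as small as the sketch below; the port, the /search context, and the queryElasticsearch helper are hypothetical stand-ins for the actual pre-processing and ES-query code:

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    // Minimal sketch of the kind of server described above. queryElasticsearch()
    // stands in for the code that preprocesses the input and calls the ES APIs.
    public class SearchHttpServer {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8090), 0);
            server.createContext("/search", exchange -> {
                String query = exchange.getRequestURI().getQuery(); // e.g. q=some+text
                String json = queryElasticsearch(query);
                byte[] body = json.getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders().set("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            });
            server.start();
        }

        private static String queryElasticsearch(String query) {
            return "{\"hits\":[]}"; // placeholder: preprocess input, call ES, return JSON
        }
    }

Everything here ships with the JDK, which is what makes the zero-configuration point above work.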
My Web UI makes HTTP GET requests to the Java server, receives the JSON-formatted data, and consumes it happily. My UI is implemented in PHP, but any web language does the job, as long as you can issue HTTP requests.
This solution works really well in my case because it leaves me with no dependencies on ES plugins. I can do any sort of pre-processing before calling ES, and even post-process the ES output before sending the results back to the UI.
Depending on the type of pre-processing, you can create an Elasticsearch plugin as a custom analyser or custom filter: you essentially extend the appropriate Lucene class(es) and wrap everything into an Elasticsearch plugin (see the sketch after the links below). Once the plugin is loaded, you can configure the custom analyser and apply it to the relevant fields. There are a lot of analysers and filters already available in Elasticsearch, so you might want to have a look at those before writing your own.
Elasticsearch plugins: https://www.elastic.co/guide/en/elasticsearch/reference/1.6/modules-plugins.html (a list of known plugins at the end)
Defining custom analysers: https://www.elastic.co/guide/en/elasticsearch/guide/current/custom-analyzers.html
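As a rough sketch of the "extend the appropriate Lucene class(es)" step, a custom token filter boils down to something like the following; the class name and the upper-casing transformation are purely illustrative, and the Elasticsearch plugin wrapping/registration is omitted:

    import java.io.IOException;
    import org.apache.lucene.analysis.TokenFilter;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

    // Hypothetical filter that upper-cases each token; a real filter would
    // apply whatever pre-processing the application needs.
    public final class UpperCaseFilter extends TokenFilter {
        private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);

        public UpperCaseFilter(TokenStream input) {
            super(input);
        }

        @Override
        public boolean incrementToken() throws IOException {
            if (!input.incrementToken()) {
                return false; // no more tokens in the stream
            }
            // Transform the current token in place.
            String upper = termAtt.toString().toUpperCase();
            termAtt.setEmpty().append(upper);
            return true;
        }
    }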
Regarding the best way to design a system using Spring MVC (REST services) and jQuery, I think the following approaches exist.
One war file in which you have the Spring services and the jQuery stuff. With this approach we have all the domain objects available to Spring MVC; we can create the initial JSP pages and then refresh some elements using jQuery calls to our services.
Two war files, one containing the Spring services and the other containing the Spring MVC stuff and jQuery. In this case the pages can still be created as JSPs and elements refreshed with jQuery calls to our services, but to make this possible we need a common library of domain objects shared with the second war, and we also have to create some controllers that internally use RestTemplate (it sounds like duplicated code).
One war file running the REST services and another "package" without any Java or Spring stuff, only jQuery. This means all calls and information retrieval must be done using jQuery; initial JSP page generation is not possible with this option and all content is obtained via the REST services (no need for internal controllers that call the services from Java).
Thinking about it, I realized that the first and second options have the following disadvantages.
Having the services and the web stuff in the same war file sounds like a bad idea from an SOA point of view; moving that war around means moving unneeded jQuery and web stuff with it.
Having JSP and jQuery stuff mixed doesn't sound like a good idea either, but I think it is common practice (I wonder why?). With this approach we need to create some controllers in the second war just to build the initial web pages, using RestTemplate to obtain the initial information, and then update or refresh elements with jQuery calls. It feels like having a controller whose only job is to fetch data from the services, so why not go directly?
I just want to implement the third approach, but the question is: are there any disadvantages I'm not seeing, or any advice I should know before using it? Any suggestions on handling this kind of system would also be great to hear, coming from Java and jQuery developers.
I agree with you that version 3 gives you the most flexibility and is what you would typically see in the design world.
Treat the REST API and the front end as entirely separate applications. If done correctly, you can have a very robust application capable of proper agility.
Version 1: Load the page in an initial controller call, and use jQuery to make subsequent service calls. All code exists within one package.
The disadvantage is tight coupling. You are now restricted to the language of your API, and you are no longer providing a service-based approach to your data and services.
I have seen this version applied mostly when the application developer cares more about async front-end calls than about a service-oriented design.
Version 2: Have a war containing the Spring services, and a war for the JS.
The issues with this method can be overcome by using a jar instead of another server application. Though this approach is commonly used, the drawback is still the reliance on external packaging.
Using a jar that contains all the code to hit the database and create domain objects, separate from the code the controllers use to serialize and respond to web requests, is a very clean way to manage your API. However, it creates complexity and an extra component that can be avoided with version 3. It also exhibits the same odd behavior you see in version 1.
I have seen this approach taken by teams developing pure API applications, but not by teams that also have to deliver a front-end component; versions 1 or 3 have been used in those cases.
Version 3: Create an application that deals with just the front-end responsibility, and a separate application that handles the server-side responsibility.
In both version 2 and version 3, separate your service calls from your HTTP handling. Keeping them distinct buys you modularity; the shared service is sketched below.
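To make the separation concrete, the shared piece can be a plain Spring service that knows nothing about transports. This is a minimal sketch: the MyService name matches the controller code below, and the implementation body is a placeholder.

    import org.springframework.stereotype.Service;

    // Transport-agnostic business logic, shared by the HTTP and JMS entry points below.
    public interface MyService {
        String getData(String dataId);
    }

    @Service
    class MyServiceImpl implements MyService {
        @Override
        public String getData(String dataId) {
            // placeholder: look up the data (database, another service, ...)
            return "data-for-" + dataId;
        }
    }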
For instance, we need to respond to HTTP requests:
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.*;

// HTTP transport layer: translates web requests into calls on the service.
@Controller
public class MyController {

    @Autowired
    private MyService service;

    // @ResponseBody serializes the return value straight into the response.
    @RequestMapping(value = "/data/{dataId}", method = RequestMethod.GET)
    @ResponseBody
    public String getData(@PathVariable String dataId) {
        return service.getData(dataId);
    }
}
and we need to respond to ActiveMQ requests:
// JMS transport layer: the same service backs a queue-based entry point.
Message m = queueReceiver.receive();
if (m instanceof DataRequest) {
    DataRequest request = (DataRequest) m;
    // Same service call as the HTTP side, different transport.
    queueSender.send(service.getData(request.getDataId()));
} else {
    // Handle error
}
Also, it gives you the ability to manage what you need on the HTTP side separately from your service side:
// HTTP-specific concerns (auth, error mapping) live here, not in the service.
@RequestMapping(value = "/data/{dataId}", method = RequestMethod.GET)
@ResponseBody
public String getData(HttpServletRequest request, @PathVariable String dataId) {
    if (!this.handleAuth(request)) {
        // placeholder: an exception your framework maps to HTTP 403
        throw new ForbiddenException();
    }
    try {
        return service.getData(dataId);
    } catch (Exception e) {
        // placeholder: wrap service errors in a proper HTTP error response
        throw new WrappedErrorInProperHttpException(e);
    }
}
This allows your service layer to handle tasks meaningful to the services themselves, and lets you deal with all the HTTP crap separately from your service layer.
I am looking to implement a way to transfer data from one application to another programmatically in Google App Engine.
I know there is a way to achieve this with the Datastore Admin console, but that process is very time-inefficient.
I am currently implementing this with Google Cloud Storage (GCS), but that involves querying the data, saving it to GCS, and then reading it from GCS in the other app and restoring it.
Please let me know if anyone knows a simpler way of transferring data between two applications programmatically.
Thanks!
I haven't tried this myself, but it sounds like it should work: use the Datastore Admin to back up your objects to GCS from one app, then use your other app to restore that file from GCS. This should be a good method if you only require a one-time sync.
If you need to constantly replicate data from one app to another, introducing REST endpoints at one or both sides could help:
https://code.google.com/p/appengine-rest-server/ (this is in Python, I know, but just define a version of your app for the REST endpoint)
You just need to make sure your model definitions match on both sides (pretty much deploy the same model code to both apps), and have the side that needs the data track the time of the last sync and use the REST endpoints to pull in new data. Cron jobs can do this.
Alternatively, create a PostPut callback on all of your models to make a POST call to the proper REST endpoint on the other app every time a model is written to your datastore.
You can batch-update with the first method, or keep a constantly updated copy with the second (at the expense of more calls). Either way, the receiving side is just an HTTP endpoint that stores whatever it is given; a sketch follows.
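A minimal sketch of that receiving endpoint under the Java runtime might look like the following; the servlet name, entity kind, and request parameters are all hypothetical, and it uses the App Engine low-level datastore API:

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import com.google.appengine.api.datastore.DatastoreService;
    import com.google.appengine.api.datastore.DatastoreServiceFactory;
    import com.google.appengine.api.datastore.Entity;

    // Hypothetical endpoint on the receiving app: the sending app POSTs one
    // record per request (id + payload), which is written to the datastore.
    public class SyncReceiverServlet extends HttpServlet {
        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
            Entity record = new Entity("SyncedRecord", req.getParameter("id"));
            record.setProperty("payload", req.getParameter("payload"));
            record.setProperty("syncedAt", System.currentTimeMillis());
            datastore.put(record);
            resp.setStatus(HttpServletResponse.SC_NO_CONTENT); // 204: stored, no body
        }
    }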
Are you trying to move data between two App Engine applications, or trying to export all your data from App Engine so you can move to a different hosting system? Your question doesn't have enough information to understand what you're attempting to do. Based on the vague requirements, I would say this is typically handled with a web service: you write it in one application to expose the data, and the other application calls that service to consume it. I'm not sure why Cloud Endpoints was downvoted, because it provides a nice way to expose your data as a JSON-based web service with a minimum of coding fuss.
I'd recommend adding some more detail to your question, like exactly what you are trying to accomplish, and maybe a mock data sample.
You could create a backup of your data using the bulkloader and then restore it in another application.
Details here:
https://developers.google.com/appengine/docs/python/tools/uploadingdata?csw=1#Python_Downloading_and_uploading_all_data
Refer to this post if you are using Java:
Downloading Google App Engine Database
I don't know if this is suitable for your concrete scenario, but Google Cloud Endpoints are definitely a simple way of transferring data programmatically from Google App Engine.
They are essentially Google's implementation of REST web services, so they allow you to share resources using URLs. This is still an experimental technology, but in the time I've worked with them, they have worked perfectly. Moreover, they are very well integrated with GAE and the Google Plugin for Eclipse.
You can automatically generate an endpoint from a JDO persistent class, and I think you can automatically generate the client libraries as well (although I didn't try that).
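As a rough sketch (the api name and Record type are hypothetical; @Api and @ApiMethod come from the Endpoints SDK), an endpoint exposing records for the other app to pull might look like:

    import com.google.api.server.spi.config.Api;
    import com.google.api.server.spi.config.ApiMethod;
    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical Endpoints class on the exporting app; the other app (or any
    // HTTP client) consumes the generated REST URL or the generated client library.
    @Api(name = "syncapi", version = "v1")
    public class SyncApi {

        // Hypothetical bean standing in for whatever persistent class holds the data.
        public static class Record {
            public String id;
            public String payload;
        }

        @ApiMethod(name = "records.list", httpMethod = "GET")
        public List<Record> listRecords() {
            // placeholder: in a real app, query the datastore (e.g. via JDO)
            return new ArrayList<Record>();
        }
    }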
My current Java code is deployed to the DB, and the PL/SQL code calls it and uses it.
I need to get the Java code out of the database and still be able to use it.
The options I had in mind are:
Web services
Java stored procedures
Calling an OS command using PL/SQL
RMI
What is your recommendation?
Please add pros and cons.
thanks,
Leeran
my current java code is deployed to the DB and the plsql code calls it and uses it. I need to get the Java code out of the database and still be able to use it.
Can you clarify what "get the Java code out of the database" means to you? Is your intent to package it in a JAR and remove it from the database entirely? It's not clear to me whether or not you wish to move this operation to the middle tier and not have PL/SQL call it anymore.
Here are my thoughts on your proposed choices:
Web services - moves the operation out of the database and onto the middle tier (a minimal sketch follows this list).
Java stored procedures - I don't understand this. If it's in the database now, how does this change anything? You can certainly have any client call that PL/SQL, which in turn will invoke your Java code. It's a question of whether it's more efficient to perform that operation on the middle tier or on the database server.
Calling an OS command using PL/SQL - I don't understand this at all.
RMI - just another remoting choice, like web services. If you encapsulate the operation as a Java POJO service, you can remote the code any way you wish. Note that RMI means Java-only clients, whereas a web service can accept a request from any client that can speak HTTP, including non-Java clients.
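To make the web-service option concrete, here is a minimal JAX-WS sketch; the class names and placeholder logic are hypothetical, and the point is only that the formerly in-database Java becomes an HTTP-reachable service (which PL/SQL could still call, e.g. via UTL_HTTP, if needed):

    import javax.jws.WebMethod;
    import javax.jws.WebService;
    import javax.xml.ws.Endpoint;

    // Hypothetical middle-tier wrapper around the code pulled out of the database.
    @WebService
    public class LegacyCodeService {

        @WebMethod
        public String process(String input) {
            // placeholder: call into the Java code extracted from the database
            return input.toUpperCase();
        }

        public static void main(String[] args) {
            // The JDK's built-in JAX-WS runtime can publish this; no app server needed.
            Endpoint.publish("http://localhost:8080/legacy", new LegacyCodeService());
        }
    }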
It isn't completely clear what you want to do or why. If you just want to re-use some code, then as part of your build procedure, fetch the Java source from the database so it can be compiled and included in your jar/war file.
I have an ASP.NET web service that returns a custom entity object (Staff):
[WebMethod]
public Staff GetStaffByLoginID(string loginID){}
How would I consume this in Java?
Thanks!
ASP.NET automatically generates a WSDL that contains the interface definitions for your web methods and the types they consume/return.
Apache Axis provides a tool called WSDL2Java that will generate all of the code you need to consume the web service. Simply point it at:
http://yoursite.com/YourWebService.asmx?WSDL
If you browse directly to the .ASMX file, you'll get a nice test harness that you can use to explore the various methods you can call.
Once Axis reads your WSDL, it will generate some proxy classes; one of them will be based on the interface of Staff.
However, I would not use this class as your actual business object, and instead would wrap access to the web service through a service layer. This service layer would use the proxy Staff class to populate your real business object.
This protects your consuming code from any interface changes that may happen to the web service in the future, keeping the actual area of code that would be modified as small as possible.
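For illustration, consuming the generated stubs typically looks something like the sketch below; every class name here is hypothetical, since Axis derives the real locator, port, and proxy names from your WSDL:

    import java.rmi.RemoteException;
    import javax.xml.rpc.ServiceException;

    // Hypothetical service-layer wrapper around Axis-generated stubs
    // (e.g. StaffServiceLocator / StaffServiceSoap, derived from the WSDL).
    public class StaffClient {
        public Staff fetchStaff(String loginId) throws ServiceException, RemoteException {
            StaffServiceLocator locator = new StaffServiceLocator();
            StaffServiceSoap port = locator.getStaffServiceSoap();
            // 'Staff' here is the generated proxy type, not your business object;
            // map it onto your real domain class inside this service layer.
            return port.getStaffByLoginID(loginId);
        }
    }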
I do this for a living, interopping between Java and .NET on many platforms using SOAP.
EDIT: Why is this downvoted? It's the only correct answer here.
Just use the standard WSDL, as mentioned by flyswat, if you are using traditional ASMX web services.
Other solutions if you are not using standard ASP.NET web services:
Use REST
http://www.infoq.com/articles/REST-INTEROP
http://www.codeproject.com/KB/XML/WSfromJava.aspx
Make sure the objects are serializable, and as long as you can map them to a similar class on the Java side, you are good. Otherwise, you might have to write some custom class mappers in Java.
You may be able to do this by running Java on IKVM.