I have a frontend web application developed in Backbone which hits a backend REST API, e.g. to download data from a web service into a user interface table.
In IntelliJ I have set up a Maven project with two modules: one for functional Selenium (WebDriver/Java) tests and a second for REST.
What I am planning to do is to create, under the REST module, a class which calls the relevant REST API JSON method, store what was returned somewhere, and then, under the Selenium module, assert it against what the UI table displays. This is a kind of integration test.
But... this is theory. In real life I have doubts whether it could work like I've described, and I'm not sure what I should use to download data from REST. I've been thinking about REST-assured or SoapUI, but maybe you could advise what should be used (and how)?
The best way to deal with this kind of problem is:
1) Generate Java classes for your request and response JSON objects using the link below.
2) Populate the request and call the Jersey API to populate the response object.
3) Once you get the response object, create one more response object by reading the UI.
4) Compare these two objects and assert equality, as sketched below.
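A minimal sketch of steps 2)–4), assuming a Jersey (JAX-RS 2.0) client with a JSON provider such as Jackson on the classpath; the /api/rows endpoint, the Row POJO, and the table's CSS selectors are assumptions, not part of the original setup:

import java.util.ArrayList;
import java.util.List;
import java.util.Objects;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.GenericType;
import javax.ws.rs.core.MediaType;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;
import static org.junit.Assert.assertEquals;

public class TableIntegrationTest {

    // Hypothetical POJO matching one JSON row (step 1 would generate this)
    public static class Row {
        public String name;
        public String value;
        @Override public boolean equals(Object o) {
            return o instanceof Row
                    && Objects.equals(name, ((Row) o).name)
                    && Objects.equals(value, ((Row) o).value);
        }
        @Override public int hashCode() { return Objects.hash(name, value); }
    }

    @Test
    public void uiTableMatchesRestResponse() {
        // Step 2: call the REST endpoint and map the JSON onto POJOs
        Client client = ClientBuilder.newClient();
        List<Row> apiRows = client.target("http://localhost:8080/api/rows")
                .request(MediaType.APPLICATION_JSON)
                .get(new GenericType<List<Row>>() {});

        // Step 3: build the same objects from what the UI table displays
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://localhost:8080/app");
            List<Row> uiRows = new ArrayList<>();
            for (WebElement tr : driver.findElements(By.cssSelector("#dataTable tbody tr"))) {
                Row row = new Row();
                row.name = tr.findElement(By.cssSelector("td:nth-of-type(1)")).getText();
                row.value = tr.findElement(By.cssSelector("td:nth-of-type(2)")).getText();
                uiRows.add(row);
            }

            // Step 4: compare the two views of the same data
            assertEquals(apiRows, uiRows);
        } finally {
            driver.quit();
        }
    }
}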
You may use Jasmine or Dredd for that.
Jasmine is a BDD-oriented framework for JavaScript testing. It's very useful for testing your JavaScript components, which includes calling your API through your web framework.
Dredd offers more than that and takes a different approach, but it may also be used for this.
You can also get the work done with simple Java + JUnit + Gson unit testing (even with some BDD framework like Concordion), or even with some RAML-based tool.
Although they're not the same, they could offer what you need. Other non-framework-based alternatives are Fiddler and SoapUI.
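Of those, the plain Java + JUnit + Gson route might look like this minimal sketch (the JSON payload and the User POJO are hypothetical; in a real test the string would come from an HTTP call to the API):

import com.google.gson.Gson;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ApiContractTest {

    // Hypothetical POJO mirroring the JSON returned by the API
    static class User {
        String name;
        int age;
    }

    @Test
    public void parsesUserJson() {
        // Stubbed response body; fetch it from the API in a real test
        String json = "{\"name\":\"Alice\",\"age\":30}";
        User user = new Gson().fromJson(json, User.class);
        assertEquals("Alice", user.name);
        assertEquals(30, user.age);
    }
}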
In client-server applications with Spring Boot and Angular, most resources I find explain how to expose a REST endpoint from Spring Boot and consume it from Angular with an HTTP client.
Most of the time, communicating in JSON is recommended, maintaining DTOs (Data Transfer Objects) on both the Angular and Spring Boot sides.
I wonder if people with fullstack experience know some alternative that avoids maintaining DTOs on both the frontend and the backend, maybe by sharing models between the two ends of the application?
Swagger would be a good tool to use here.
You can take a code-first approach, which will generate a Swagger spec from your Java controllers and TOs, or a spec-first approach, which will generate your Java controllers and TOs from a Swagger spec.
Either way, you can then use the Swagger spec to generate a set of TypeScript interfaces for the client side.
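For the code-first direction, a rough sketch of what that looks like with Springfox-style annotations (PlayerController and PlayerDto are made-up names; the point is that the spec is derived from them):

import io.swagger.annotations.ApiOperation;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// With springfox on the classpath, this controller and its TO are scanned
// and a Swagger spec is served (by default at /v2/api-docs), which a
// TypeScript generator can then consume on the client side.
@RestController
public class PlayerController {

    @ApiOperation("Fetch a player by id")
    @GetMapping("/players/{id}")
    public PlayerDto getPlayer(@PathVariable long id) {
        return new PlayerDto(id, "Alice"); // stub; a real app would call a service
    }
}

class PlayerDto {
    public long id;
    public String name;

    PlayerDto(long id, String name) {
        this.id = id;
        this.name = name;
    }
}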
As Pace said, Swagger would be a great fit here. Besides giving you great documentation for the API endpoints, it lets you sync object models between frontend and backend: you just use Swagger's .json or .yaml file to generate the services and object models on the frontend side with ng-swagger-gen.
Then put the command for generating the services and object models in package.json so it runs when you build or serve your application, for instance:
...
"scripts": {
  ...
  "start": "ng-swagger-gen && ng serve",
  "build": "ng-swagger-gen && ng build --prod"
  ...
},
...
So after running one of these commands you will have updated object models, and if an object property's name or type changed, or a property was added or removed, you will get an error and have to fix it before moving forward.
Note: just keep in mind that the services and object models are generated from the Swagger file, so it should always be kept up to date.
PS: I once worked on a project where even all the backend code was generated from the Swagger file ;) so they just changed the Swagger file and that was it.
This is a difficult topic, since we are dealing with two different technology stacks. The only way I see is to generate those objects from a common data model.
Sounds good, doesn't work. It is not just about maintaining DTOs.
Let's say the API changes a String to a List. It is not enough to update your TypeScript DTO from string to string[]: there is logic behind the string manipulation that now needs to handle a list of strings. Personally I do not find it troublesome to maintain DTOs on both sides. It's a small tradeoff for flexibility and cleaner code (you will have different methods in different DTOs).
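To make that concrete, a hypothetical Java-side change of the kind described:

// Before: the DTO carries a single value
class ArticleDtoV1 {
    String tag;                  // client code may call tag.trim(), tag.toUpperCase(), ...
}

// After: the API changes the field to a list
class ArticleDtoV2 {
    java.util.List<String> tags; // regenerating the client type gives you string[],
                                 // but every piece of logic that handled the single
                                 // tag must now loop, pick an ordering, handle the
                                 // empty case, and so on
}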
I have a Java client that allows indexing documents on a local ElasticSearch server.
I now want to build a simple Web UI that allows users to query the ES index by typing in some text in a form.
My problem is that, before calling ES APIs to issue the query, I want to preprocess the user input by calling some Java code.
What is the easiest and "cleanest" way to achieve this?
Should I create my own APIs so that the UI can access my Java code?
Should I build the UI with JSP so that I can directly call my Java code?
Can I somehow make ElasticSearch execute my Java code before the query is executed? (Perhaps by creating my own ElasticSearch plugin?)
In the end, I opted for the simple solution of using JSON-based RESTful APIs. Time proved this to be quite flexible and effective for my case, so I thought I should share it:
My Java code exposes its ability to query an ElasticSearch index by running an HTTP server and responding to client requests with JSON-formatted ES results. I created the HTTP server with a few lines of code using com.sun.net.httpserver.HttpServer. There are more serious/complex HTTP servers out there (such as Tomcat), but this one was very quick to adopt and involved zero configuration headaches.
My web UI makes HTTP GET requests to the Java server, receives JSON-formatted data and consumes it happily. My UI is implemented in PHP, but any web language does the job, as long as you can issue HTTP requests.
This solution works really well in my case, because it avoids any dependency on ES plugins. I can do any sort of pre-processing before calling ES, and even post-process the ES output before sending the results back to the UI.
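A minimal sketch of that gateway, with the pre-processing and the ES call stubbed out (the port, the /search context path, and the helper names are assumptions):

import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class SearchGateway {

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8090), 0);

        server.createContext("/search", exchange -> {
            // Pull the raw user input from the query string, e.g. /search?q=foo
            String query = exchange.getRequestURI().getQuery();

            // Pre-process in Java, then query Elasticsearch (stubbed below)
            String json = queryElasticsearch(preprocess(query));

            byte[] body = json.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });

        server.start();
    }

    private static String preprocess(String rawQuery) {
        // Whatever normalization the user input needs before hitting ES
        return rawQuery == null ? "" : rawQuery.trim().toLowerCase();
    }

    private static String queryElasticsearch(String query) {
        // Stub: a real implementation would call the ES client here
        return "{\"query\":\"" + query + "\",\"hits\":[]}";
    }
}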
Depending on the type of pre-processing, you can create an Elasticsearch plugin, such as a custom analyser or a custom filter: you essentially extend the appropriate Lucene class(es) and wrap everything into an Elasticsearch plugin. Once the plugin is loaded, you can configure the custom analyser and apply it to the relevant fields. There are a lot of analysers and filters already available in Elasticsearch, so you might want to have a look at those before writing your own.
Elasticsearch plugins: https://www.elastic.co/guide/en/elasticsearch/reference/1.6/modules-plugins.html (a list of known plugins at the end)
Defining custom analysers: https://www.elastic.co/guide/en/elasticsearch/guide/current/custom-analyzers.html
Regarding the best way to design a system using Spring MVC (REST services) and jQuery, I think the following approaches exist.
1) One war file in which you have the Spring services and the jQuery stuff. With this approach we have all the domain objects available to Spring MVC; we can create the initial JSP pages and then refresh some elements using jQuery calls to our services.
2) Two war files, one containing the Spring services and the other containing the Spring MVC stuff and jQuery. In this case pages can still be created with JSP and elements refreshed with jQuery calls to our services, but to make this possible we need a common library of domain objects for the second war, and we also need to use RestTemplate internally in some controllers that have to be created (it sounds like duplicated code).
3) One war file running the REST services and another "package" without any Java or Spring stuff, only jQuery. This means all calls and information retrieval must be done with jQuery; initial JSP page creation is not possible with this option and all content is obtained via the REST services (no need for internal controllers that call the services from Java).
Thinking about it, I realized that the first and second have the following disadvantages.
Having services and web stuff in the same war file sounds like a bad idea from a SOA perspective; moving that war around means moving unneeded jQuery and web stuff with it.
Having JSP and jQuery stuff mixed doesn't sound like a good idea either, but I think it is common practice (I wonder why?). With this approach we need to create some controllers in the second war just to build the initial web pages, use RestTemplate to obtain the initial information, and then update or refresh via jQuery calls. It feels like having a controller whose only job is to fetch data from the services. Why not go directly?
I just want to implement the third approach, but the question is: are there any disadvantages I'm not seeing, or anything I should know before using that approach? Any suggestions on how to handle this kind of system would also be great to hear from Java and jQuery developers.
I agree with you that version 3 gives you the most flexibility and is what you would typically see in the design world.
Treat the rest and the front end as separate applications entirely. If done correctly, you can have a very robust application capable of proper agility.
Version 1: Load the page in an initial controller call, and use jQuery to make subsequent service calls. All code exists within one package.
The disadvantage is tight coupling. You are now restricted to the language of your API, and you are no longer providing a service-based approach to your data and services.
I have seen this version applied mostly when the application developer cares more about async front-end calls than about a SOA-based design.
Version 2: Have a war containing the Spring services, and a war for the JS.
The issues with this method can be overcome by using a jar instead of another server application. Though this approach is commonly used, the drawback is still the reliance on external packaging.
Using a jar that contains all the code to hit databases and create domain objects, separate from the code the controllers use to serialize and respond to web requests, is a very clean way to manage your API; however, it adds complexity and an extra component that can be avoided with version 3. It also exhibits the same odd behavior you see in version 1.
I have seen this approach taken by teams developing pure API applications. I have not seen it done on teams that also require a front-end component; method one or three has been used in those cases.
Version 3: Create an application that deals with just the front-end responsibility, and an application that handles the server-side responsibility.
In both version 2 and version 3, separate your service calls from your HTTP calls. Keeping them distinct buys you modularity.
For instance, we need to respond to HTTP requests:
@Controller
class MyController {

    @Autowired
    private MyService service;

    // Thin HTTP layer: translate the web request and delegate to the service
    @GetMapping("/data/{dataId}")
    @ResponseBody
    public String getData(@PathVariable String dataId) {
        return service.getData(dataId);
    }
}
and we need to respond to ActiveMQ requests:
Message m = queueReceiver.receive();
if (m instanceof DataRequest) {
    DataRequest request = (DataRequest) m;
    queueSender.send(service.getData(request.getDataId())); // same service call, different transport
} else {
    // Handle the unexpected message type
}
Also, it gives you the ability to manage what you need to handle on the HTTP side separately from what your service side handles:
@GetMapping("/data/{dataId}")
@ResponseBody
public String getData(HttpServletRequest request, @PathVariable String dataId) {
    if (!this.handleAuth(request)) {
        throw new ResponseStatusException(HttpStatus.FORBIDDEN); // the original's "throw new 403()"
    }
    try {
        return service.getData(dataId);
    } catch (Exception e) {
        throw new WrappedErrorInProperHttpException(e);
    }
}
This allows your service layer to handle tasks meaningful to just those services without needing to handle all the HTTP crap, and it lets you deal with all the HTTP crap separately from your service layer.
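For completeness, a sketch of the transport-agnostic service that both snippets above delegate to (only the MyService name and the getData call come from the snippets; the rest is assumed):

// The service layer shared by the HTTP controller and the ActiveMQ listener;
// it knows nothing about requests, queues, sessions, or status codes.
public interface MyService {
    String getData(String dataId);
}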
I have developed a simple Spring MVC RESTful API and have now moved on to creating a simple GWT project to perform some requests against this API; naturally I chose to communicate by exchanging JSON messages.
When receiving a response I will have to unmarshal it to a POJO.
I am aware that the general approach is to create the so-called 'overlay types', but that looks to me like a mere duplicate of the Java classes I wrote in the API.
So the question is:
why shouldn't I simply create a common API that contains the shared classes needed to perform this marshalling/unmarshalling?
I can clearly see that the main benefit is that if any change is needed, you won't also have to change the overlay types.
Assuming that you can define interfaces for your POJOs, you can share those interfaces between the client and server sides (in a common package).
On the server side you have to code the implementations, which are used by the RESTful API.
On the client side, the implementation of those interfaces can be generated automatically. For this you can use gwtquery databinding or GWT AutoBeans.
To request your RESTful API, you can use either gwtquery ajax or GWT RequestBuilder.
Each option has its advantages. Normally I use gwtquery because of its simplicity and because its databinding approach is more lightweight; on the other hand, with AutoBeans you can create your POJOs using AutoBean factories on both the client and server sides. If you have already developed your backend, though, that is not a goal for you.
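As a rough sketch of the AutoBeans route (the Player interface and factory are made up; on the client you would obtain the factory with GWT.create instead of AutoBeanFactorySource):

import com.google.web.bindery.autobean.shared.AutoBean;
import com.google.web.bindery.autobean.shared.AutoBeanCodex;
import com.google.web.bindery.autobean.shared.AutoBeanFactory;
import com.google.web.bindery.autobean.vm.AutoBeanFactorySource;

public class AutoBeanDemo {

    // Shared interface, usable from a common package on both sides
    interface Player {
        String getName();
        void setName(String name);
    }

    interface Beans extends AutoBeanFactory {
        AutoBean<Player> player();
    }

    public static void main(String[] args) {
        // Server/JVM side; client side would use GWT.create(Beans.class)
        Beans beans = AutoBeanFactorySource.create(Beans.class);

        AutoBean<Player> bean = beans.player();
        bean.as().setName("Alice");

        // JSON round-trip with no hand-written overlay types
        String json = AutoBeanCodex.encode(bean).getPayload();
        Player decoded = AutoBeanCodex.decode(beans, Player.class, json).as();
        System.out.println(json + " -> " + decoded.getName());
    }
}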
The REST response can be consumed by any client, not one specific client. If I understand your question correctly, you want to build the marshalling and unmarshalling logic inside your REST API. That violates the Single Responsibility Principle: you might need to change the mapping logic when the service changes, so you are touching two different aspects of the API whereas only one component should require change.
Also, the REST API should ideally be designed to be client-agnostic. It is your specific requirement to translate the responses to POJOs, but another client might want to consume them as plain JSON. If you keep the overlay type on your side, your code stays quite loosely coupled.
If your server-side class (Player, for example) can be serialized/deserialized without any problems, then you can send it to the client side without any overlay type or conversion (serialization to JSON on the server -> transport -> deserialization from JSON on the client). On the client side you can use RestyGWT, for example, to achieve an automatic deserialization process. Overlay types and a conversion step are only necessary when the Player instance cannot be serialized (for example, because it is backed by Hibernate).
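For instance, a minimal RestyGWT service sketch (the Player POJO and the path are hypothetical; RestyGWT handles the JSON-to-POJO mapping behind the MethodCallback):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import org.fusesource.restygwt.client.MethodCallback;
import org.fusesource.restygwt.client.RestService;

// Declared once; obtain an instance with GWT.create(PlayerService.class)
public interface PlayerService extends RestService {
    @GET
    @Path("/players/{id}")
    void getPlayer(@PathParam("id") long id, MethodCallback<Player> callback);
}

// The shared server-side POJO from the answer, shown here for completeness
class Player {
    private String name;
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}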
I have a PHP web application environment. I am using the Slim Framework as the REST interface for my application. My application front-end is written using Backbone.js and jQuery.
There is a utility (a .jar file) which, when I run it from the command line, makes a remote call (I guess this is a web service) that returns the data.
How do I best incorporate this into the web application described above?
My application front end will have a Button that should make an AJAX call to the REST Interface and fetch the data as JSON.
My approach:
The PHP REST interface at /api/phprestapi.php already exists
Add a Java REST interface at /api/javarestapi.java (perhaps) to separate the two
Existing Environment: LAMP Stack on Ubuntu
How do I achieve this? What is the kind of effort involved?
Thanks for your pointers
If I understand you correctly, you need to get the data output by the jar into PHP. If that is the case, then you should start looking at the different ways to execute a program from PHP [1]; exec is probably the best known.
If you want more control, I would recommend learning more about the web service being called by the jar and making the call to that web service directly in PHP. However, this would take a lot more time than the first option above.
[1] http://php.net/manual/en/ref.exec.php