We have a large system of physical devices that all run a web service for control, plus a central control system for controlling these devices. I need to make a substitute for such a physical device in order to test the controlling unit. How do I go about running more than one instance of a test device on a single computer? The protocol used is SOAP, with a WSDL that is written in stone. In addition to the web service, each test device needs a web server for monitoring state and generating events.
My first approach was to embed Jetty and use Axis2 for the web services, but I am having some trouble making that fly. I managed to get the Axis2 SimpleHttpServer working with a web service, but as far as I can tell SimpleHttpServer will not let me run servlets, let alone WARs. Is there a better approach I am missing?
I considered making a proxy server that listens on any number of ports and forwards each request to a central web service with an additional parameter saying which port the request originated from, but since the WSDL is written in stone I cannot pass that parameter along.
EDIT: I am using NetBeans to generate a web service for me. It works like a charm, but it is not enough for my project, and for some reason wsimport chokes on the WSDL. I don't understand how NetBeans can deploy to the bundled GlassFish server, yet if I drop the generated dist/my-project.war into Tomcat the web service doesn't work, much less show up in web.xml. What am I missing?
Be aware that if you route your network requests through a SOCKS proxy, the SOCKS proxy can redirect even hardcoded host names and ports to whatever you need.
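For a JVM-based test client, pointing the client at a SOCKS proxy needs no code changes; a minimal sketch using the standard Java networking properties (the host, port, and jar name below are placeholders) would be:
// Standard JVM networking properties; host/port below are placeholders.
System.setProperty("socksProxyHost", "localhost");
System.setProperty("socksProxyPort", "1080");
// Or equivalently on the command line:
//   java -DsocksProxyHost=localhost -DsocksProxyPort=1080 -jar control-client.jar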
Axis2 is not meant to be used as a servlet container, so using SimpleHttpServer doesn't help you there.
But Jetty is a full-featured servlet container. If you want to make this work, you have to run your WARs with Jetty. (Or any other servlet container, but Jetty is perfectly fine.)
I'm no Jetty expert, but something like this should work:
import org.mortbay.jetty.Server;            // Jetty 6-era packages; newer versions use org.eclipse.jetty.*
import org.mortbay.jetty.servlet.Context;
import org.mortbay.jetty.servlet.ServletHolder;

Server server = new Server(8080);
Context root = new Context(server, "/", Context.SESSIONS);
root.addServlet(new ServletHolder(yourServletInstance), "/*");
server.start();
(Taken from Jetty Wiki)
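To deploy a full WAR rather than a single servlet, embedded Jetty also offers a WebAppContext. A rough sketch along those lines (Jetty 6-era API again; the port and WAR path are assumptions):
import org.mortbay.jetty.Server;
import org.mortbay.jetty.webapp.WebAppContext;

// One embedded Jetty instance per simulated device, each on its own port.
Server server = new Server(9001);                       // assumed device port
WebAppContext webapp = new WebAppContext();
webapp.setContextPath("/");
webapp.setWar("path/to/test-device.war");               // hypothetical WAR location
server.setHandler(webapp);
server.start();
Starting several such Server instances in one JVM, each on a different port, is one way to get multiple simulated devices onto a single machine.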
OK, I've figured out a solution. I can use GlassFish: I deploy the same webapp multiple times under different names, and then a small proxy (also hosted in GlassFish) listens on a number of ports and translates each request to one of the instances running in GlassFish.
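As an illustration of that port-to-instance translation, here is a rough sketch using only the JDK's built-in HttpServer (Java 9+ assumed; the ports, context-root names, and header handling are assumptions, and requests are assumed to be plain HTTP POSTs, as SOAP calls usually are):
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

// Rough sketch: each simulated device gets its own external port, and requests
// are forwarded to a correspondingly named deployment on the app server.
public class DevicePortForwarder {
    public static void main(String[] args) throws Exception {
        int devices = 5;                 // assumed number of simulated devices
        int basePort = 9000;             // assumed external port range
        for (int i = 0; i < devices; i++) {
            final String target = "http://localhost:8080/test-device-" + i;   // assumed context roots
            HttpServer proxy = HttpServer.create(new InetSocketAddress(basePort + i), 0);
            proxy.createContext("/", exchange -> {
                URL url = new URL(target + exchange.getRequestURI());
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod(exchange.getRequestMethod());
                // Copy request headers so Content-Type and SOAPAction survive the hop.
                exchange.getRequestHeaders().forEach((name, values) ->
                        conn.setRequestProperty(name, String.join(",", values)));
                if ("POST".equalsIgnoreCase(exchange.getRequestMethod())) {
                    conn.setDoOutput(true);
                    try (OutputStream out = conn.getOutputStream()) {
                        exchange.getRequestBody().transferTo(out);
                    }
                }
                byte[] body = conn.getInputStream().readAllBytes();   // no error handling in this sketch
                String contentType = conn.getContentType();
                if (contentType != null) {
                    exchange.getResponseHeaders().set("Content-Type", contentType);
                }
                exchange.sendResponseHeaders(conn.getResponseCode(), body.length);
                exchange.getResponseBody().write(body);
                exchange.close();
            });
            proxy.start();
        }
    }
}
A reverse proxy in front of GlassFish (Apache, nginx, etc.) would do the same job more robustly; the sketch just shows the idea.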
The gRPC server seems to be implemented using Netty. Is there a way to use other implementations?
Netty is the only supported server. You can either have two separate ports (one for your other server, one for gRPC), or you could reverse proxy from your other server to the Netty server.
There is work underway (tracking issue) to allow serving via the Servlet API, so that any servlet container could be used. But there are restrictions, like needing to be the root ('/') webapp. It is far enough along to test it and provide feedback, but there may also be some gaps in the implementation.
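For the two-port option, a minimal grpc-java sketch (the port and the MyServiceImpl service class are placeholders) could look like:
import io.grpc.Server;
import io.grpc.ServerBuilder;

// Netty-based gRPC server on its own port; the existing HTTP server keeps its own port.
Server grpcServer = ServerBuilder.forPort(9090)          // assumed gRPC port
        .addService(new MyServiceImpl())                 // hypothetical generated service implementation
        .build()
        .start();
grpcServer.awaitTermination();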
The scenario is this: I'm developing a Java EE application with an Angular 2 frontend. The client has an Apache server, which is usually used to serve static resources, and an Oracle WebLogic server for the dynamic part. The problem is that, by default, the Angular 2 app and the WebLogic server will not be able to talk to each other due to the Same Origin Policy.
So far I have 3 possible deployment approaches in mind:
Set up a Reverse Proxy in Apache to point the REST endpoints to Weblogic
Package the Angular App in a WAR/EAR and deploy it to Weblogic. So I would end up with something like: myserver/myapp for the UI and myserver/myapp-rest for the Backend.
Package the Angular App in the same WAR as the Java backend. So I would end up with myserver/myapp for the UI and myserver/myapp/api for the REST endpoints.
There is a 4th option, which would be setting up CORS, but I'm worried about the security of that approach.
Which is the right approach?
If you are allowed to make infrastructure decisions, change the Apache web server to nginx; we switched to nginx and got a lot of added value in terms of concurrent processing.
In our project the Angular client is served by an nginx web server, which talks to the Java backend hosted on Tomcat 8.x (our app server); there are also a couple of tiers behind the app server: a separate DB server and an Elasticsearch server.
Don't feel intimidated about setting up CORS; you will eventually need to allow some requests that don't originate from your own domain and port.
If your Java tech stack has Spring MVC, then setting up CORS is just a matter of adding a few lines of configuration. You can even hardcode your Angular URL so that the backend server only serves requests from that URL.
In the normal Java EE world, CORS is just another filter or interceptor where you set response headers with the allowed origins, HTTP methods, etc. It's very simple; you can look it up.
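For illustration, a minimal Java EE filter along those lines might look like the sketch below (the allowed origin is a placeholder for your Angular app's URL):
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletResponse;

// Minimal CORS filter sketch; the allowed origin below is a placeholder.
@WebFilter("/*")
public class CorsFilter implements Filter {
    public void init(FilterConfig config) {}

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse response = (HttpServletResponse) res;
        response.setHeader("Access-Control-Allow-Origin", "http://my-angular-host:4200");
        response.setHeader("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS");
        response.setHeader("Access-Control-Allow-Headers", "Content-Type, Authorization");
        chain.doFilter(req, res);
    }

    public void destroy() {}
}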
For your given choices:
1. Seems plausible, and the added value you get is that you can delegate SSL encryption to the proxy server.
2. Seems rather odd; you would want to keep the static content server separate from the dynamic content server. Your Angular JS bundles, assets, etc. are mostly static, and if you keep the static server separate you can configure cookie-less domains down the line, which would make serving a lot faster.
3. Same as 2.
I would strongly suggest the CORS option, from my past experience.
I have no idea if it is possible, but I have heard that the best practice is to create the frontend and the backend as two independent projects. To do that, I should use nginx, right? But how exactly do I do that, and how exactly does it work?
I just create an Angular 2 project with Node.js and start the server listening on, for example, port 80.
Then I create a Java project with Jetty and start the server listening on, for example, port 90.
Then, should I somehow create an nginx project to merge the frontend and backend? I need help, because I'm afraid I can't understand how to do that.
It looks like you're mixing up a few things here:
In general it would make sense for your project to be either written in JavaScript and running in Node.js, or written in Java (or a JVM language) as a servlet, in which case it will run inside a servlet container like Jetty or Tomcat.
A web server like nginx or Apache httpd can be placed in front of the backend service in order to handle static content and to provide caching, security, load balancing, etc.
I have several existing web apps deployed as standalone WAR files in the app container (Resin). Some use Axis2 JAR files and Axis2-generated stub files to make calls to external SOAP-based web services. They were all working fine prior to this.
I recently deployed axis2.war to the same container in order to create web services (unrelated to the client code mentioned above).
As soon as I restart the app container, client calls to the external web services seem to be "intercepted" by my newly deployed axis2.war.
Services appear temporarily in the "Available Services" page of the axis2 web app, with what appear to be randomly generated names based on the original external web service names.
These services disappear shortly after, but the result is that my client code fails with a 500 error, as the local axis2.war doesn't know how to handle those requests.
I have been searching for two days and haven't found any mention of anyone experiencing anything similar. I am not even sure how to explain what is going on, as my client code never references localhost when making those web service calls. I am assuming this has to do with some configuration in axis2.war?
If anyone has any idea or insight into what might be happening, I would really appreciate any information.
Thanks for your help
I am inheriting a project involving a Java web app whose backend is powered by an Apache httpd/Tomcat combo. The web server is being used to serve JS and static content and to perform general load balancing, while Tomcat serves JSPs via a single WAR file.
I will be receiving access to the code base later on today or tomorrow, but wanted to try and do some research ahead of time.
My question can be summed up as: how do these two work together?
Who first receives HTTP requests?
How does httpd know when to forward JSP requests on to Tomcat, or to just respond to a request itself?
How does httpd "pass" the request to, and "receive" the response from, Tomcat? Does it just "copy-n-paste" the request/response to a port Tomcat is listening on? Is there some sort of OS-level interprocess communication going on? Etc.
These are just general questions about how the technologies collaborate with each other. Thanks in advance!
Who first receives HTTP requests?
Apache, almost certainly. There could be admin processes that talk directly to Tomcat, though.
How does httpd know when to forward JSP requests on to Tomcat, or to just respond to a request itself?
From its configuration. The specifics will vary. It might, for instance, be using mod_jk or mod_jk2, in which case you'll find JkMount directives in the config files, e.g.:
JkMount /*.jsp ajp13_worker
...which tells it to pass on requests at the root of the site for files matching *.jsp to the ajp13_worker, which is defined in the workers.properties file.
Or it could be set up in a simple HTTP reverse-proxy arrangement. Or something else.
How does httpd "pass" the request to, and "receive" the response from, Tomcat?
It depends on the configuration; it could be HTTP, it could be AJP, or it could be using some other module.
Does it just "copy-n-paste" the request/response to a port Tomcat is listening on?
Sort of. :-) See the reverse-proxy link above.
Is there some sort of OS-level interprocess communication going on?
Yes. AFAIK, it's all socket-based (rather than, say, shared memory stuff), which means (amongst other things) that Tomcat and Apache need not be running on the same machine.