We've got a lot of applications based on Jetty. For historical reasons, these have been sitting behind Apache servers. One of Apache's duties has been URL rewriting, and we want to move to using just Jetty, but we are hitting some snags porting the configuration. Specifically, cookie path rewrites. Is this even possible in Jetty?
The original config for Apache looks like this:
ProxyPassMatch ^/${basePattern}/${market}/(${appContextName}/.*) http://127.0.0.1:8080/app/$1 retry=0
ProxyPassReverse / http://127.0.0.1:8080/
ProxyPassInterpolateEnv on
ProxyPassReverseCookiePath /appCookiePath /${basePattern}/${market}/${appContextName} interpolate
I've looked at the code for both Rule (from the rewrite API) as well as Jetty Handlers. I can't really find anything in those APIs that would let me rewrite the Cookies... Any pointers?
There is no built-in feature in Jetty for rewriting Set-Cookie or Cookie headers.
If you are not afraid of Java code, you could create a CookiePathHandler, placed at the start of the Server handler list, to perform this logic for you.
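Such a handler mostly boils down to rewriting the Path attribute of outgoing Set-Cookie header values. A minimal sketch of just that rewriting logic (the class and method names are hypothetical; wiring it into a HandlerWrapper via an HttpServletResponseWrapper is left out):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CookiePathRewriter {
    // Replace the Path attribute of a Set-Cookie header value,
    // e.g. "JSESSIONID=abc; Path=/appCookiePath; HttpOnly"
    //   -> "JSESSIONID=abc; Path=/base/uk/app; HttpOnly"
    public static String rewritePath(String setCookie, String oldPath, String newPath) {
        return setCookie.replaceFirst(
                "(?i)(Path=)" + Pattern.quote(oldPath),
                "$1" + Matcher.quoteReplacement(newPath));
    }
}
```

A handler installed first in the chain could wrap the response and run every Set-Cookie value through a method like this before the response is committed, mirroring what ProxyPassReverseCookiePath did in Apache.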
Pop into jetty-users mailing list or #jetty on chat.freenode.net for help (if you've never written a Jetty handler before)
We're using Websphere to host a series of applications for us:
We have an API that's hosted at the root, written in Java
We have an Angular application hosted at /admin
We have another Angular application hosted at /marketing
My concern is about deep-linking. If a user is at /marketing/products/1, and they refresh their browser or share the link, I need the server to send that route to the correct Angular application so it can be generated correctly.
In a simpler setup, where the Angular application is living at the root, I would use the Java application's web.xml file to redirect traffic to "/". But in my current scenario, I need traffic for the marketing site to go to "/marketing", not just to "/". Just like a deep-link from the admin site would need to go to "/admin".
Furthermore, the base URLs for these Angular applications are subject to change, and we also plan to add additional Angular sites to this same server. So I'm really looking for a solution that can work dynamically and have the server redirect to the first "slug" in the URL rather than matching specific directories.
Any ideas? (And please excuse and correct any misconceptions I've demonstrated above -- I currently know very little about WebSphere)
I can see a couple of possible ways forward.
You could still use the error-page directive in web.xml, but specify the URL of a servlet in your application that could do the inspection manually and issue a redirect as appropriate. How the list of context roots is provided to your app will differ based on how it's packaged, but it could be done using files, environment variables, or JNDI entries in server.xml.
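The slug-based decision that servlet would make can be as simple as taking the first path segment and checking it against the known context roots. A rough sketch of just that decision (class and method names are hypothetical; the surrounding servlet and the response.sendRedirect call are omitted):

```java
import java.util.Set;

public class SlugRouter {
    // Known Angular context roots; in practice these would come from
    // files, environment variables, or JNDI entries in server.xml.
    private static final Set<String> APPS = Set.of("admin", "marketing");

    // Return the redirect target for a deep link, or null if the
    // first path segment is not a known Angular app.
    public static String redirectTarget(String requestPath) {
        String[] parts = requestPath.split("/");
        if (parts.length > 1 && APPS.contains(parts[1])) {
            return "/" + parts[1] + "/";
        }
        return null;
    }
}
```

The error-page servlet would call something like this with the original request URI and either redirect to the returned target or fall through to the API's own 404 handling.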
If the URLs could be changed, the Angular apps could be changed to use HashLocationStrategy in their routers which would sidestep the error page. It doesn't seem likely that that's the case but I'll put it here to get it out of the way.
You could consider splitting each Angular app into its own .war file and configuring the context root in the webApplication element in server.xml. Then redirecting to / in web.xml would work since that / is relative to the context-root.
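If you go the one-war-per-app route, the server.xml entries might look like this (the war file names here are hypothetical):

```xml
<!-- One war per Angular app, each with its own context root -->
<webApplication location="admin.war" contextRoot="/admin" />
<webApplication location="marketing.war" contextRoot="/marketing" />
```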
We ended up combining those separate Angular applications into 1 so that WebSphere could direct everything to "/" and Angular routing could handle everything from there.
I am using WildFly 10.1.0.Final on Ubuntu 16.04.2 LTS Server, with SSL certificates from letsencrypt.org, the HTTP/2 protocol, and Spring Security 4.2.2.RELEASE in production.
The server is working fine with very good performance, but I'm getting many reports in the WildFly log of java.net.URISyntaxException (error 500) from user agents like "Mozilla/5.0 Jorgee".
I would like to know how I can block these types of bad user agents (malware, bots, etc.) and/or prevent this from happening.
Thanks in advance for all the help.
I had the same problem multiple times recently, and the origins of the requests were random (Brazil, Germany, Argentina, the US, Ireland...). I'm not sure there is a way to blacklist these requests within the WildFly configuration; however, you may want to consider creating a custom Java EE Filter.
The solution to my problem was to put a WAF in front of the CDN, but if you don't have one, you may want to put Nginx in front of your web app and blacklist the user agent "Jorgee" as well as paths such as:
/2phpmyadmin/
/admin/phpMyAdmin/
...
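For example, an Nginx front end could reject these scans outright; a sketch (adjust the patterns to whatever shows up in your own logs):

```nginx
# Inside the server block: drop the known scanner user agent
if ($http_user_agent ~* "jorgee") {
    return 403;
}

# Refuse the phpMyAdmin-style probe paths this scanner walks through
location ~* /(phpmyadmin|pma|mysqladmin) {
    return 403;
}
```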
You can find more info in this blog post by Kurtis Rader.
I have a vps from which I would like to serve a wordpress site, and a Tapestry5 webapp.
I usually use Jetty for hosting my webapps and Apache for WordPress, but I am struggling to find even a rough guide on how to configure the two to play nicely together on the same host.
My aim is to have:
blog.foo.com -> The apache server hosting wordpress
foo.com -> The jetty webapp.
I want to avoid having port numbers showing in the address bar, so ideally either Jetty or Apache can inspect the URL, work out which one should respond, and forward the request appropriately. Since I'm new to this area, I'm struggling to find a guide (or rather, to know which guides to even look for) to achieve this.
My questions are:
Is this possible?
Roughly how does one go about setting this up - is there a specific part of either of the documentations which will help me? Which one of these should be listening on port 80 (assuming they can't both be on that port)?
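The arrangement described above (one server listening on port 80, checking the Host header and forwarding) is typically done with Apache name-based virtual hosts and mod_proxy. A sketch, assuming Jetty listens on 8080 and mod_proxy/mod_proxy_http are enabled (paths and ports are assumptions):

```apache
# Apache listens on port 80 for both names
<VirtualHost *:80>
    ServerName blog.foo.com
    DocumentRoot /var/www/wordpress
</VirtualHost>

<VirtualHost *:80>
    ServerName foo.com
    # Forward everything to Jetty, assumed on localhost:8080
    ProxyPass        / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>
```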
I am inheriting a project involving a Java web app whose backend is powered by an Apache httpd/Tomcat combo. The web server is being used to serve back JS, static content, and to perform general load balancing, and Tomcat is serving back JSPs via a single WAR file.
I will be receiving access to the code base later on today or tomorrow, but wanted to try and do some research ahead of time.
My question can be summed up as: how do these two work together?
Who first receives HTTP requests?
How does httpd know when to forward JSP requests on to Tomcat, or to just respond to a request itself?
How does httpd "pass" the request to, and "receive" the response from, Tomcat? Does it just "copy-n-paste" the request/response to a port Tomcat is listening on? Is there some sort of OS-level interprocess communication going on? Etc.
These are just general questions about how the technologies collaborate with each other. Thanks in advance!
Who first receives HTTP requests?
Apache, almost certainly. There could be admin processes that talk directly to Tomcat, though.
How does httpd know when to forward JSP requests on to Tomcat, or to just respond to a request itself?
From its configuration. The specifics will vary. It might, for instance, be using mod_jk or mod_jk2, in which case you'll find JkMount directives in the config files, e.g.:
JkMount /*.jsp ajp13_worker
...which tells it to pass on requests at the root of the site for files matching *.jsp to the ajp13_worker, which is defined in the workers.properties file.
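A typical workers.properties for that setup defines the worker and points it at Tomcat's AJP connector (the values below are the common defaults, not necessarily yours):

```properties
worker.list=ajp13_worker
worker.ajp13_worker.type=ajp13
worker.ajp13_worker.host=localhost
worker.ajp13_worker.port=8009
```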
Or it could be set up in a simple HTTP reverse-proxy arrangement. Or something else.
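In the reverse-proxy case, the httpd configuration would contain something along these lines (context path and port are assumptions):

```apache
ProxyPass        /app http://localhost:8080/app
ProxyPassReverse /app http://localhost:8080/app
```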
How does httpd "pass" the request to, and "receive" the response from, Tomcat?
It depends on the configuration; it could be HTTP, it could be AJP, or it could be using some other module.
Does it just "copy-n-paste" the request/response to a port Tomcat is listening on?
Sort of. :-) See the reverse-proxy arrangement mentioned above.
Is there some sort of OS-level interprocess communication going on?
Yes. AFAIK, it's all socket-based (rather than, say, shared memory stuff), which means (amongst other things) that Tomcat and Apache need not be running on the same machine.
I was taking a look at WebSockets last week and gave some thought to how to implement the server side with the Java Servlet API. I didn't spend too much time on it, but during a few tests with Tomcat I ran into the following problems, which seem impossible to solve without patching the container, or at least making container-specific modifications to the HttpServletResponse implementation:
The WebSocket specification mandates a specific message in the 101 HTTP response. HttpServletResponse.setStatus(int code, String message) is deprecated without any mention of a usable replacement. After changing the default Tomcat configuration, I got Tomcat to honor my message string, but since the method is deprecated, I'm not sure this will work with other servlet containers.
The WebSocket specification requires a specific order for the first few headers in the HTTP response to the connection upgrade request. The Servlet API offers no method to specify the order of response headers, and Tomcat adds its own headers to the response, placing a few of them before any headers added by the servlet implementation.
Since the content length of the response is not known when committing the header, Tomcat automatically switches to chunked transfer encoding for the response, which is incompatible with the WebSocket specification.
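For reference, the part of the handshake that is container-independent is computing the response key: the (now finalized) RFC 6455 derives Sec-WebSocket-Accept from the client's Sec-WebSocket-Key by appending a fixed GUID, hashing, and base64-encoding. A sketch in plain Java (the draft protocols the question refers to used a different challenge scheme):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class WebSocketAccept {
    // Fixed GUID from RFC 6455, section 1.3
    private static final String GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11";

    // Compute the Sec-WebSocket-Accept value for a client's Sec-WebSocket-Key
    public static String accept(String clientKey) throws Exception {
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        byte[] digest = sha1.digest((clientKey + GUID).getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(digest);
    }
}
```

The hard parts the question lists (status line text, header ordering, avoiding chunked encoding) still require container cooperation; only this computation is portable.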
Am I missing something obvious, or is it really not possible to integrate WebSocket server endpoints in a servlet based web app?
There is an implementation in Jetty. We can hope that Tomcat and Jetty converge on a compatible API.
The GlassFish Atmosphere project will do what you want. There is a servlet you can define to do all the work.
jWebSocket claims to run as a Tomcat application. Unfortunately, some files are missing from the binary distribution of jWebSocket. Some people are trying to recompile jWebSocket to obtain the necessary files, since the source code is available. All in all, jWebSocket does not seem like a reliable product.
Yes, there is a very good one (open source and completely free): http://www.jWebSocket.org