Jetty 9 supports both its own Jetty WebSocket API and the standard JSR 356 API, for what I assume are historical reasons (Jetty's API precedes the final JSR 356).
I've looked over the basic documentation of both APIs, as well as some examples. Both APIs seem fairly complete and rather similar. However, I need to choose one over the other for a new project I'm writing, and I'd like to avoid using an API that might be deprecated in the future or might turn out to be less feature-rich.
So are there any important differences between the two except for the obvious fact that one is standardized?
Implementor of both on Jetty here :)
The Jetty WebSocket API came first, and the JSR-356 API is built on top of it.
The JSR-356 API does a few things that the Jetty WebSocket API does not (a short sketch follows this list), such as:
Decoders for automatic Bin/Text to Object conversion
Encoders for automatic Object to Bin/Text conversion
Path Param handling (aka automatic URI Template to method param mapping)
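For illustration, here is a minimal JSR-356 sketch of the Decoder and path-param points; the /rooms/{roomId} path and the ChatMessage type are made-up placeholders, but the javax.websocket annotations and Decoder.Text interface are the standard API.

// In a real project ChatMessage and ChatMessageDecoder would be separate public classes.
import javax.websocket.DecodeException;
import javax.websocket.Decoder;
import javax.websocket.EndpointConfig;
import javax.websocket.OnMessage;
import javax.websocket.Session;
import javax.websocket.server.PathParam;
import javax.websocket.server.ServerEndpoint;

class ChatMessage {
    final String text;
    ChatMessage(String text) { this.text = text; }
}

// Converts incoming text frames into ChatMessage objects automatically.
class ChatMessageDecoder implements Decoder.Text<ChatMessage> {
    @Override public ChatMessage decode(String s) throws DecodeException { return new ChatMessage(s); }
    @Override public boolean willDecode(String s) { return s != null; }
    @Override public void init(EndpointConfig config) { }
    @Override public void destroy() { }
}

@ServerEndpoint(value = "/rooms/{roomId}", decoders = ChatMessageDecoder.class)
public class RoomEndpoint {
    @OnMessage
    public void onMessage(@PathParam("roomId") String roomId, ChatMessage message, Session session) {
        // roomId comes from the URI template; message comes from the decoder.
        System.out.printf("room %s: %s%n", roomId, message.text);
    }
}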
However, the Jetty WebSocket API can do things the JSR-356 API cannot (see the sketch after this list):
WebSocketCreator logic for arbitrary creation of the WebSocket endpoint, with access to the HttpServletRequest
Better control of timeouts
Finer buffer / memory configurations
You can manage WebSocket Extensions
Supports regex-based path mappings for endpoints
Access to raw Frame events
WebSocket client supports HTTP proxies (JSR-356 standalone client has no configuration options for proxies)
WebSocket client supports better connect logic with timeouts
WebSocket client supports SSL/TLS (JSR-356 standalone client has no configuration options for SSL/TLS)
Access to both the local and remote InetAddress information from the active WebSocket Session object
Access to UpgradeRequest from active WebSocket Session object
Better support for stateless endpoints
Read events support suspend/resume logic to allow the application some basic TCP backpressure / flow control
Filter based or Servlet based configuration (the JSR-356 approach requires upgrade to occur before all other servlet and filter processing)
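And a rough sketch of the Jetty-only side (hedged): the servlet, timeout and buffer calls below are the Jetty 9 servlet-based setup, while MyBinarySocket and the header name are hypothetical placeholders.

import org.eclipse.jetty.websocket.servlet.ServletUpgradeRequest;
import org.eclipse.jetty.websocket.servlet.ServletUpgradeResponse;
import org.eclipse.jetty.websocket.servlet.WebSocketCreator;
import org.eclipse.jetty.websocket.servlet.WebSocketServlet;
import org.eclipse.jetty.websocket.servlet.WebSocketServletFactory;

public class MyWebSocketServlet extends WebSocketServlet {

    @Override
    public void configure(WebSocketServletFactory factory) {
        factory.getPolicy().setIdleTimeout(30_000);              // finer timeout control
        factory.getPolicy().setMaxBinaryMessageSize(64 * 1024);  // buffer configuration
        factory.setCreator(new MyCreator());
    }

    public static class MyCreator implements WebSocketCreator {
        @Override
        public Object createWebSocket(ServletUpgradeRequest req, ServletUpgradeResponse resp) {
            // Full access to the upgrade request (and its HttpServletRequest) before the endpoint exists.
            String version = req.getHeader("X-My-Protocol-Version"); // placeholder header
            if (version == null) {
                return null; // decline the upgrade
            }
            // MyBinarySocket is a placeholder for a @WebSocket-annotated class or WebSocketListener.
            return new MyBinarySocket(version);
        }
    }
}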
Hope this helps. If you want more details, please use the jetty-users mailing list, as this sort of question is really inappropriate for Stack Overflow.
Related
I am reading https://www.mnot.net/blog/2019/10/13/h2_api_multiplexing and wondering what it means for our HTTP API implemented in JAX-RS 2.1; especially this excerpt:
Your server implementation will also need to be carefully considered to exploit this kind of request pattern
I've read that Java 9 introduced client APIs for HTTP/2, and that Servlet 4.0 introduced support for HTTP/2 server features like Server Push.
What I'm wondering is whether the JDK (or the layers below it) automatically handles initiating the HTTP/2 connection and/or the multiplexing aspect of the protocol, or whether there is something I need to do in our server config to ensure HTTP/2 (and especially the multiplexing feature) is properly supported. Or maybe it's server-implementation-specific?
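For the client side at least, the java.net.http client that became standard in Java 11 (the Java 9 version was incubating under a different package) handles connection initiation, ALPN negotiation and stream multiplexing internally; a minimal sketch, with placeholder URLs, might look like this.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class Http2ClientSketch {
    public static void main(String[] args) {
        // Requests HTTP/2; the client negotiates via ALPN and falls back to HTTP/1.1
        // if the server does not speak h2. Multiplexing several requests over one
        // connection is handled inside the client, not by application code.
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)
                .build();

        List<CompletableFuture<HttpResponse<String>>> futures = new ArrayList<>();
        for (String path : List.of("/a", "/b", "/c")) { // placeholder paths
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("https://example.com" + path)).build(); // placeholder host
            futures.add(client.sendAsync(request, HttpResponse.BodyHandlers.ofString()));
        }

        for (CompletableFuture<HttpResponse<String>> future : futures) {
            HttpResponse<String> response = future.join();
            // response.version() reports whether HTTP/2 was actually negotiated.
            System.out.println(response.version() + " " + response.statusCode());
        }
    }
}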
My application uses a custom binary protocol which is implemented with Netty. Recently I changed it to use Netty's websocket implementation. It works quite well.
My application also includes a Jetty web server, and it offers WebSockets too. Now I want to reduce the number of open ports my server needs and handle all HTTP traffic on one port.
I see three options:
Use either Netty or Jetty to proxy the traffic which belongs to the other implementation.
Reimplement the custom protocol on Jetty without the use of Netty's channels and pipelines.
Create a custom implementation of Netty's channels that sends and receives its data not over a socket but via the methods Jetty's WebSocketListener offers.
Since Netty provides such a good API for writing binary protocols, and a proxy sounds like extra trouble to me, I tend toward the third approach. It shouldn't be too difficult to implement, even though I don't know how to do it yet.
Any thoughts what would be the best option and how I should implement it?
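For what it's worth, here is a minimal sketch of the third option, assuming Jetty 9's WebSocketListener and Netty's EmbeddedChannel; MyProtocolDecoder and MyProtocolHandler are placeholders for your existing pipeline handlers.

import java.nio.ByteBuffer;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.embedded.EmbeddedChannel;
import org.eclipse.jetty.websocket.api.Session;
import org.eclipse.jetty.websocket.api.WebSocketListener;

// Bridges Jetty WebSocket events into a Netty pipeline without a real socket.
public class NettyBridgeSocket implements WebSocketListener {

    private EmbeddedChannel channel;
    private Session session;

    @Override
    public void onWebSocketConnect(Session session) {
        this.session = session;
        // MyProtocolDecoder / MyProtocolHandler are placeholders for the existing handlers.
        this.channel = new EmbeddedChannel(new MyProtocolDecoder(), new MyProtocolHandler());
    }

    @Override
    public void onWebSocketBinary(byte[] payload, int offset, int len) {
        // Feed the incoming WebSocket frame into the Netty pipeline...
        channel.writeInbound(Unpooled.wrappedBuffer(payload, offset, len));
        // ...and push anything the pipeline produced back out over the WebSocket.
        Object outbound;
        while ((outbound = channel.readOutbound()) != null) {
            ByteBuffer buffer = ((ByteBuf) outbound).nioBuffer();
            session.getRemote().sendBytesByFuture(buffer);
        }
    }

    @Override
    public void onWebSocketText(String message) {
        // The custom protocol is binary only.
    }

    @Override
    public void onWebSocketClose(int statusCode, String reason) {
        if (channel != null) {
            channel.close();
        }
    }

    @Override
    public void onWebSocketError(Throwable cause) {
        if (channel != null) {
            channel.close();
        }
    }
}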
I have been reading many articles to find the best REST client for a Java application. I finally found that using Jersey with Apache HTTP Client 4.5 is great, but in a lot of articles I read that Retrofit is now the best. (I didn't mention Volley because in my case I don't need the API to support caching.)
Is Retrofit better for a Java client application, or is it just better for Android? And why didn't I find this comparison before; can they not be compared?
Can I have a comparison of their performance, connection pooling, which layer they work at, compression of requests and responses, timeouts, and de-serialization?
HTTP3 does not support connection pooling; is that why Retrofit is usually used for Android? If so, it will not be practical for a normal Java application, where it will cause connection leaks.
My goal is to find the best REST API client with high performance and support for a high number of connections.
Thank you in advance
You're mixing different things together. To clear things up up-front:
Retrofit is a client library to interact with REST APIs. As such it offers the same abstraction level as Jersey, RESTeasy or Spring's RestTemplate. They all allow you to interact with REST APIs through a type-safe API without having to deal with low-level aspects like serialization, request building and response handling.
Each of those libraries uses an HTTP client underneath to actually talk to an HTTP server. Examples are the Apache HTTP Client you mentioned, OkHttp or the plain old HttpURLConnection shipping with the JDK.
You can usually mix and match the different REST client libraries and HTTP clients, except for Retrofit, which has a hard dependency on OkHttp since version 2 (with Retrofit 1.x you could use Apache HTTP Client, HttpURLConnection or OkHttp).
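To make the abstraction level concrete, here is a minimal Retrofit 2 sketch (the GitHub URL and Contributor type are only illustrative; a Jersey or RESTeasy client proxy would look structurally similar).

import java.util.List;

import retrofit2.Call;
import retrofit2.Retrofit;
import retrofit2.converter.gson.GsonConverterFactory;
import retrofit2.http.GET;
import retrofit2.http.Path;

public class RetrofitSketch {

    // The REST API is described as a plain Java interface.
    interface GitHubService {
        @GET("repos/{owner}/{repo}/contributors")
        Call<List<Contributor>> contributors(@Path("owner") String owner, @Path("repo") String repo);
    }

    static class Contributor {
        String login;
        int contributions;
    }

    public static void main(String[] args) throws Exception {
        Retrofit retrofit = new Retrofit.Builder()
                .baseUrl("https://api.github.com/")
                .addConverterFactory(GsonConverterFactory.create()) // (de)serialization via Gson
                .build();

        GitHubService service = retrofit.create(GitHubService.class);
        List<Contributor> contributors = service.contributors("square", "retrofit").execute().body();
        contributors.forEach(c -> System.out.println(c.login + ": " + c.contributions));
    }
}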
Back to the actual question: What to pick when.
Android: It's easy here because JAX-RS, the API/technology behind Jersey and RESTeasy, isn't supported on Android. Hence Retrofit is more or less your only option, except maybe Volley if you don't want to talk HTTP directly. Spring isn't available either, and Spring Android is abandoned.
JRE/JDK: Here you have the full choice of options.
Retrofit might be nice if you want a quick and easy solution to implement a client for a third-party API for which no SDK or JAX-RS interfaces are available.
Spring's RestTemplate is a good choice if you're using Spring and there are no JAX-RS interfaces or you don't want to buy into JAX-RS, i.e. also using it on the server-side.
JAX-RS (Jersey, RESTeasy, …) is a good choice if you want to share interface definitions between client and servers or if you're all-in on JavaEE anyway.
Regarding performance: the main drivers here are the time spent doing HTTP and (de)serialization. Because (de)serialization is performed by specialized libraries like Jackson or protobuf, and all of these clients use the same ones (or can at least be configured to), there shouldn't be any meaningful difference.
It took a while, but I have found the perfect REST client library, one that makes our development declarative and easy. We can use it as the standard when developing new REST implementations or APIs.
It is called Feign, developed by the Netflix team and made to work with Spring Cloud Netflix. More details here on the project’s site.
Some features include:
- Integration with Jackson, Gson and other Encoders/Decoders
- Using OkHttp for network communication, a proven HTTP library
- Binding with SLF4J for logging features
- Interface-based implementation, minimal development. Below is a sample client:
// Imports assume Spring Cloud OpenFeign; older Spring Cloud Netflix releases use org.springframework.cloud.netflix.feign.FeignClient instead.
import java.util.List;
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

@FeignClient("stores")
public interface StoreClient {
    @RequestMapping(method = RequestMethod.GET, value = "/stores")
    List<Store> getStores();

    @RequestMapping(method = RequestMethod.POST, value = "/stores/{storeId}", consumes = "application/json")
    Store update(@PathVariable("storeId") Long storeId, Store store);
}
And following @aha's answer, as quoted below:
JRE/JDK: Here you have the full choice of options.
Retrofit might be nice if you want a quick and easy solution to implement a client for a third-party API for which no SDK or JAX-RS interfaces are available.
Spring's RestTemplate is a good choice if you're using Spring and there are no JAX-RS interfaces or you don't want to buy into JAX-RS, i.e. also using it on the server-side.
JAX-RS (Jersey, RESTeasy, …) is a good choice if you want to share interface definitions between client and servers or if you're all-in on JavaEE anyway.
Feign works like Retrofit and JAX-RS combined: it is an easy solution, you can share interface definitions between client and server, and you can use JAX-RS interfaces.
I have a Jersey based server that I want to secure with OAuth 2.0. There are two paths that I've seen as common:
Oltu - Is compatible with Jersey and seems to be supported, although not as well as Spring Security. This 2012 question seems to suggest this is the way to go, but I want confirmation in a 2016 context so I don't implement something that is no longer well supported.
Spring Security - It seems to be very popular, but this path implies turning the server into a Spring-based MVC application. I don't know whether that is recommendable, weighing the benefits of using something as widely supported as Spring against the cost of the refactoring.
By support I mean a project that is under continuous development, with a well-established community, tutorials, materials, and some client libraries (web, mobile, server) already available.
Which one is a stronger option? Is there another option or options?
In any case, is there good reference material or a tutorial to start implementing this?
UPDATE
After a few hours of reading about both of the OAuth providers I mentioned, I feel Apache Oltu's documentation did not guide me much, as there are key components that aren't documented yet, though an example gave me a better picture of how Oltu must be implemented. On the other hand, going through Spring Security's material I learned that it can still be used in a Java project that is not based on Spring MVC. However, there is limited exposure of implementations/tutorials for Spring Security in a non-Spring-based project.
Another approach:
I came up with an architecture that might be more stable and would not care about the implementation details of the inner server (the one already implemented using Jersey): a server dedicated to security (authorizing, authenticating, storing tokens in its own database, etc.) sits in the middle and acts as a gateway between the outside world and the inner server. It essentially acts as a relay, routes the calls back and forth, and ensures that the client knows nothing about the inner server; both entities communicate only with the security server. I feel this would be the path to move forward, as:
Replacing the security provider just means unplugging the security server implementation and adding the new one.
The security server cares nothing about the inner server implementation, and the calls would still follow RESTful standards.
I'd appreciate your suggestions or feedback on this approach.
Apache Oltu supports OpenID Connect but its architecture is bad. For example, OpenIdConnectResponse should not be a descendant of OAuthAccessTokenResponse because an OpenID Connect response does not always contain an access token. In addition, the library weirdly contains a GitHub-specific class, GitHubTokenResponse.
Spring Security is famous, but I'm afraid it will never be able to support OpenID Connect. See Issue 619 about the big hurdle for OpenID Connect support.
java-oauth-server and java-resource-server are good examples of Jersey + OAuth 2.0, but they use a commercial backend service, Authlete. (I'm the author of them.)
OpenAM, MITREid Connect, Gluu, Connect2id, and other OAuth 2.0 + OpenID Connect solutions are listed in Libraries, Products, and Tools page of OpenID Foundation.
**UPDATE** for the update of the question
RFC 6749 (The OAuth 2.0 Authorization Framework) distinguishes an authorization server from a resource server. In short, an authorization server is a server that issues an access token, and a resource server is a server that responds to requests which come along with an access token.
For a resource server, API Gateway is one of the recent design patterns. Amazon, CA Technologies, IBM, Oracle and other companies provide API Gateway solutions. API Gateway architecture may be close to your idea. Some API Gateway solutions verify access tokens in their own ways (because the solutions issue access tokens by themselves) and other solutions just delegate access token verification to an external server (because the solutions don't have a mechanism to issue access tokens). For example, Amazon API Gateway is an example that delegates access token verification to an external server, which Amazon has named custom authorizer. See the following for further information about custom authorizer.
Introducing custom authorizers in Amazon API Gateway (AWS Blog)
Enable Amazon API Gateway Custom Authorization (AWS Document)
Amazon API Gateway Custom Authorizer + OAuth (Authlete article)
If an authorization server provides an introspection API (such as RFC 7662) that you can use to query information about an access token, your resource server implementation may be able to swap one authorization server for another (plug out and add) comparatively easily.
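As a hedged sketch of that delegation from a Jersey-based resource server (the introspection URL is a placeholder, and a real deployment also needs whatever client authentication the authorization server requires):

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Entity;
import javax.ws.rs.core.Form;
import javax.ws.rs.core.MediaType;

public class TokenIntrospector {

    private static final String INTROSPECTION_ENDPOINT =
            "https://authorization-server.example.com/introspect"; // placeholder URL

    public String introspect(String accessToken) {
        Client client = ClientBuilder.newClient();
        try {
            Form form = new Form().param("token", accessToken);
            // The response is a JSON object with an "active" flag (RFC 7662).
            return client.target(INTROSPECTION_ENDPOINT)
                    .request(MediaType.APPLICATION_JSON_TYPE)
                    .post(Entity.form(form), String.class);
        } finally {
            client.close();
        }
    }
}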
For an authorization server, gateway-style solutions are rare, because such a solution has to expose all the functionality required to implement an authorization server as Web APIs. Authlete is such a solution, but I don't know of others.
I think it's far simpler to use the OAuth connectors that are implemented inside Jersey itself!
Have you considered using Jersey's own OAuth server / client support (already bundled with Jersey)?
https://eclipse-ee4j.github.io/jersey.github.io/documentation/latest/security.html#d0e13146
Please take a look at:
16.3.2. OAuth 2 Support
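A minimal client-side sketch with the jersey-client-oauth2 module (the token value and target URL are placeholders) looks roughly like this:

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;

import org.glassfish.jersey.client.oauth2.OAuth2ClientSupport;

public class OAuth2ClientExample {
    public static void main(String[] args) {
        // Registers a filter that adds "Authorization: Bearer <token>" to every request.
        Client client = ClientBuilder.newClient()
                .register(OAuth2ClientSupport.feature("ACCESS_TOKEN_VALUE")); // placeholder token

        String response = client.target("https://api.example.com/resource") // placeholder URL
                .request()
                .get(String.class);
        System.out.println(response);
    }
}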
Hope that helped. :)
I'm looking at the Java API for MarkLogic, which I assume leverages the HTTP protocol for database connections. Is it possible to establish a connection over TCP? If not with the Java API, is it possible to interrogate the database by any means over TCP?
Our current architecture is based on a Microservice architecture concept, and includes a number of stages in any given process flow through the system, including queueing, message brokering, etc. Given the number of steps, I'd like to optimise traffic speed as far as possible by leveraging TCP connections.
The Java API uses the REST application services on MarkLogic, which are fully HTTP 1.1 compliant and run over TCP/IP.
Not sure what else you are asking for.
For programs written in Java, the Java API is the recommended API for most uses:
http://developer.marklogic.com/products/java
You can also use the REST services directly, but the Java API adds a lot of best practices and exposes a higher level of abstraction to make coding simpler.
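As a rough sketch of the Java API (the host, port, credentials and document URI below are placeholders, and older releases of the Java Client API use a slightly different newClient signature):

import com.marklogic.client.DatabaseClient;
import com.marklogic.client.DatabaseClientFactory;
import com.marklogic.client.document.TextDocumentManager;
import com.marklogic.client.io.StringHandle;

public class MarkLogicExample {
    public static void main(String[] args) {
        // Connects to a MarkLogic REST API instance over HTTP (which is TCP underneath).
        DatabaseClient client = DatabaseClientFactory.newClient(
                "localhost", 8000,
                new DatabaseClientFactory.DigestAuthContext("user", "password")); // placeholders
        try {
            TextDocumentManager docMgr = client.newTextDocumentManager();
            docMgr.write("/example/hello.txt", new StringHandle("Hello, MarkLogic"));
            String content = docMgr.read("/example/hello.txt", new StringHandle()).get();
            System.out.println(content);
        } finally {
            client.release();
        }
    }
}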
You can use the REST API from any application that can do HTTP
http://docs.marklogic.com/guide/rest-dev/intro
But it's a bit more work, as you have to construct your HTTP messages directly.
You can also create your own HTTP interface and access it through TCP/IP (HTTP) by making an HTTP App Server (written in XQuery).
Finally, if you want very low-level but efficient access from Java or .NET, you can use the XCC interface, which is more tedious to use but provides lower-level features for advanced users. This requires the Java or .NET library, as the protocol is not documented.
https://developer.marklogic.com/products/xcc
What language are you going to be using, and what kinds of operations? That can help narrow down which API is best for you.
-David Lee
HTTP is built on TCP. So by definition all HTTP connections are over TCP.
If you'd like a proprietary protocol instead of HTTP, one option is to forget the fact you learned that the Java API uses HTTP and imagine it uses TCP directly. :)
If you really want a proprietary protocol over TCP, you can use the XDBC protocol in combination with the XCC client. By default XDBC uses a wire protocol on TCP that isn't published.