I would like to call a REST API using Spring, and would like to know whether there is an implementation similar to what Jersey provides, as shown below:
import java.util.concurrent.Future;

import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.InvocationCallback;
import javax.ws.rs.client.WebTarget;

public class FacebookService {

    private final WebTarget target = ClientBuilder.newClient()
            .target("http://graph.facebook.com/");

    public Future<FacebookUser> userAsync(String user) {
        return target
                .path("/{user}")
                .resolveTemplate("user", user)
                .request()
                .async()
                .get(new InvocationCallback<FacebookUser>() {
                    @Override
                    public void completed(FacebookUser facebookUser) {
                        // on complete
                    }

                    @Override
                    public void failed(Throwable throwable) {
                        // on fail
                    }
                });
    }
}
For example, Jersey here provides the completed and failed methods, which can be used to determine whether the API call succeeded or failed.
How do we do this with Spring?
Thanks!
Spring provides AsyncRestTemplate, which offers callback functionality. It is supported from Spring 4.x onwards. Here are the details:
http://docs.spring.io/autorepo/docs/spring/4.1.1.RELEASE/javadoc-api/org/springframework/web/client/AsyncRestTemplate.html
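For reference, a minimal sketch of how the Jersey example above might map onto AsyncRestTemplate and its ListenableFuture callbacks (FacebookUser is assumed to be the same DTO as in the question; this is only an illustration, not tested against a live endpoint):

import org.springframework.http.ResponseEntity;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;
import org.springframework.web.client.AsyncRestTemplate;

public class FacebookService {

    private final AsyncRestTemplate restTemplate = new AsyncRestTemplate();

    public ListenableFuture<ResponseEntity<FacebookUser>> userAsync(String user) {
        ListenableFuture<ResponseEntity<FacebookUser>> future =
                restTemplate.getForEntity("http://graph.facebook.com/{user}", FacebookUser.class, user);
        future.addCallback(new ListenableFutureCallback<ResponseEntity<FacebookUser>>() {
            @Override
            public void onSuccess(ResponseEntity<FacebookUser> result) {
                // on complete
            }

            @Override
            public void onFailure(Throwable throwable) {
                // on fail
            }
        });
        return future;
    }
}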
I have recently upgraded an older application, based on Spring Boot, from version 1.5.9 to 2.2.6.
Unfortunately, after upgrading, the URLs generated with HATEOAS have changed. Basically, the context-path is now missing from the links.
Example:
Before: https://domain.test.com/service/api/endpoint
Now: https://domain.test.com/service/endpoint
Right now I am using the following configs in application properties:
server.servlet.context-path: /api
server.forward-headers-strategy: FRAMEWORK
spring.data.rest.basePath: /api
(With NONE, the host is totally different because of the X-Forwarded-Host header; I have also tried NATIVE, but the behavior is the same.)
I have also created a ForwardedHeaderFilter bean.
@Bean
public ForwardedHeaderFilter forwardedHeaderFilter() {
    return new ForwardedHeaderFilter();
}
Is there anything I can do to work around this issue? Am I doing something wrong?
One alternative would be to adjust the API gateway, but that would be really complicated from a business-process perspective, so I would prefer a more technical approach.
Thank you!
As a temporary solution, until I have time to take a deeper look, I have created a new utility class that takes care of adjusting the path:
public class LinkUtil {

    private LinkUtil() {
    }

    @SneakyThrows
    public static <T> Link linkTo(T methodOn) {
        String rawPath = WebMvcLinkBuilder.linkTo(methodOn).toUri().getRawPath();
        rawPath = StringUtils.remove(rawPath, "/service");
        BasicLinkBuilder basicUri = BasicLinkBuilder.linkToCurrentMapping().slash("/api").slash(rawPath);
        return new Link(basicUri.toString());
    }
}
Where /api is the context-path.
Then I use it like this:
Link whateverLink = LinkUtil.linkTo(methodOn(WhateverClass.class).whateverMethod(null)).withRel("whatever-rel"));
@LoolKovski's temporary solution relies on an existing ServletRequest because of linkToCurrentMapping(). Use the following code if you, too, need to eliminate that restriction:
public class LinkUtil {

    private LinkUtil() {
    }

    @SneakyThrows
    public static <T> Link linkTo(T methodOn) {
        var originalLink = WebMvcLinkBuilder.linkTo(methodOn);
        var rawPathWO = StringUtils.remove(originalLink.toUri().getRawPath(), "/service");
        return originalLink.withHref("/api" + rawPathWO);
    }
}
Actually, in my case the links are generated during the initialization of one of the RestController beans, so my real code looks like the following.
I don't need to cut off another path part first; I only need to prepend the configured context path.
@RestController
public class ExampleController implements ServletContextAware {

    @Override
    public void setServletContext(ServletContext servletContext) {
        final var executor = Executors.newSingleThreadExecutor();
        executor.submit(() -> {
            someRepository.getExamples().forEach((name, thing) -> {
                Link withRel = linkTo(methodOn(ExampleController.class).getElement(null, name, null))
                        .withSelfRel();
                withRel = withRel.withHref(servletContext.getContextPath() + withRel.toUri().getRawPath());
                thing.add(withRel);
            });
            executor.shutdown();
        });
    }

    @RequestMapping(path = "/{name}/", method = RequestMethod.GET, produces = MediaType.APPLICATION_JSON_VALUE)
    public HttpEntity<Example> getElement(ServletWebRequest req, @PathVariable("name") String name, Principal principal) {
        [...]
    }
}
I need to validate requests with different validators before different RPC methods are called.
So I implemented validators like this:
class BarRequestValidator {
    public FooServiceError validate(BarRequest request) {
        if (request.getBar().length() > 12) {
            return FooServiceError.BAR_TOO_LONG;
        } else {
            return null;
        }
    }
}
and added a custom annotation on my RPC method:
class FooService extends FooServiceGrpc.FooServiceImplBase {
    @Validated(validator = BarRequestValidator.class)
    public void bar(BarRequest request, StreamObserver<BarResponse> responseObserver) {
        // The validator should be executed before this line and return an error once validation fails.
        assert request.getBar().length() <= 12;
    }
}
But I can't find a way to get the annotation information in a gRPC ServerInterceptor. Is there any way to implement gRPC request validation like this?
You can accomplish this without having the annotation at all, and just using a plain ServerInterceptor:
Server s = ServerBuilder.forPort(...)
        .addService(ServerInterceptors.intercept(myService, myValidator))
        ...

private final class MyValidator implements ServerInterceptor {
    @Override
    public <ReqT, RespT> ServerCall.Listener<ReqT> interceptCall(
            ServerCall<ReqT, RespT> call, Metadata headers, ServerCallHandler<ReqT, RespT> next) {
        ServerCall.Listener<ReqT> listener = next.startCall(call, headers);
        if (call.getMethodDescriptor().getFullMethodName().equals("service/method")) {
            listener = new ForwardingServerCallListener.SimpleForwardingServerCallListener<ReqT>(listener) {
                @Override
                public void onMessage(ReqT request) {
                    validate(request);
                    super.onMessage(request);
                }
            };
        }
        return listener;
    }
}
Note that I'm skipping most of the boilerplate here. When a request comes in, the interceptor gets it first and checks to see if it's for the method it was expecting. If so, it does extra validation. In the generated code you can reference the existing MethodDescriptors rather than copying the name out like above.
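For example, assuming the FooServiceGrpc stub from the question and a reasonably recent grpc-java version, the string comparison could be replaced with the generated descriptor (a sketch, not part of the original answer):

// Hypothetical: compare against the generated MethodDescriptor instead of a raw string.
if (call.getMethodDescriptor().getFullMethodName()
        .equals(FooServiceGrpc.getBarMethod().getFullMethodName())) {
    // wrap the listener here and validate the BarRequest in onMessage(...)
}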
We have used an Activity-BL-DAO-DB (SQLite) architecture in our app while developing.
Due to a change in requirements, we have to use a REST service from the server alone. I have looked at Retrofit for this, but I'm not sure how to use it in the DAO classes instead of SQL queries.
We have looked into event-bus concepts, which require more rework. We want to make minimal changes to the code to incorporate this change.
If anything else is needed, let me know.
For example, the following is the sample flow that displays the list of technologies in the ListView.
Technology Activity onCreate method:
techList = new ArrayList<Technology>();
techList = technologyBL.getAllTechnology(appId);
adapterTech = new TechnologyAdapter(this, new ArrayList<Technology>(techList));
listView.setAdapter(adapterTech);
Technology BL:
public List<Technology> getAllTechnology(String appId) {
    techList = technologyDao.getAllTechnology(appId);
    // some logic
    return techList;
}
Technology DAO:
public List<Technology> getAllTechnology(String appId) {
    // SQL queries
    return techList;
}
Technology Model:
class Technology {
    String id, techName, techDescription;
    // getters & setters
}
I have to replace the SQL queries with Retrofit requests. I have created the following Retrofit class and interfaces:
RestClient Interface:
public interface IRestClient {
    @GET("/apps/{id}/technologies")
    void getTechnologies(@Path("id") String id, Callback<List<Technology>> cb);
    // Remaining methods
}
RestClient:
public class RestClient {
    private static IRestClient REST_CLIENT;
    public static final String BASE_URL = "http://16.180.48.236:22197/api";
    Context context;

    static {
        setupRestClient();
    }

    private RestClient() {}

    public static IRestClient get() {
        return REST_CLIENT;
    }

    private static void setupRestClient() {
        RestAdapter restAdapter = new RestAdapter.Builder()
                .setEndpoint(BASE_URL)
                .setClient(getClient())
                .setRequestInterceptor(new RequestInterceptor() {
                    // cache related things
                })
                .setLogLevel(RestAdapter.LogLevel.FULL)
                .build();
        REST_CLIENT = restAdapter.create(IRestClient.class);
    }

    private static OkClient getClient() {
        // cache related
    }
}
I tried calling with both sync and async methods in the DAO. The sync method throws an error related to the main thread. The async call crashes because the request completes too late.
Sync call in DAO:
techList = RestClient.get().getTechnologies(id);
Async call in DAO:
RestClient.get().getTechnologies(id, new Callback<List<Technology>>() {
    @Override
    public void success(List<Technology> technologies, Response response) {
        techList = technologies;
    }

    @Override
    public void failure(RetrofitError error) {}
});
You've got two options here.
The first is to create the Retrofit callback in the Activity:
RestClient.get().getTechnologies(id, new Callback<List<Technology>>() {
    @Override
    public void success(List<Technology> technologies, Response response) {
        ArrayList<Technology> techList = technologyBL.someLogic(technologies);
        // Use the enclosing Activity (class name assumed), not the Callback's own 'this'.
        adapterTech = new TechnologyAdapter(TechnologyActivity.this, techList);
        listView.setAdapter(adapterTech);
    }

    @Override
    public void failure(RetrofitError error) {}
});
Note that you will have to extract your //some logic part into a separate BL method.
The second option is to make the Retrofit API call return an RxJava Observable (which is integrated into Retrofit):
RestClient.get().getTechnologies(id)
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe(new Action1<List<Technology>>() {
            @Override
            public void call(List<Technology> technologies) {
                ArrayList<Technology> techList = technologyBL.someLogic(technologies);
                // Use the enclosing Activity (class name assumed), not the anonymous class's 'this'.
                adapterTech = new TechnologyAdapter(TechnologyActivity.this, techList);
                listView.setAdapter(adapterTech);
            }
        });
In this case, your RestClient interface is:
public interface IRestClient {
    @GET("/apps/{id}/technologies")
    Observable<List<Technology>> getTechnologies(@Path("id") String id);
    // Remaining methods
}
You can read more about it in the "SYNCHRONOUS VS. ASYNCHRONOUS VS. OBSERVABLE" section of http://square.github.io/retrofit/. Also, see these two blogposts to get your head around RxJava and Observables:
In my experience, I have found it useful to create a Service which executes calls to the Retrofit API by using custom AsyncTask implementations. This paradigm keeps all your data model interactions in one place (the service) and gets all the API calls off the main thread.
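For illustration only, a rough sketch of that idea under a couple of assumptions: IRestClient also declares a synchronous variant (e.g. List<Technology> getTechnologiesSync(@Path("id") String id)), and the Listener interface below is a hypothetical way to hand the result back to the Activity or BL layer:

import java.util.List;

import android.os.AsyncTask;

public class LoadTechnologiesTask extends AsyncTask<String, Void, List<Technology>> {

    // Hypothetical callback back into the Activity or BL layer.
    public interface Listener {
        void onLoaded(List<Technology> technologies);
    }

    private final Listener listener;

    public LoadTechnologiesTask(Listener listener) {
        this.listener = listener;
    }

    @Override
    protected List<Technology> doInBackground(String... params) {
        // Synchronous Retrofit call; safe here because doInBackground runs on a worker thread.
        return RestClient.get().getTechnologiesSync(params[0]);
    }

    @Override
    protected void onPostExecute(List<Technology> technologies) {
        // Back on the main thread: hand the data to whoever updates the adapter.
        listener.onLoaded(technologies);
    }
}

It would then be started from the Activity with something like new LoadTechnologiesTask(myListener).execute(appId).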
DropWizard uses Jersey under the hood for REST. I am trying to figure out how to write a client for the RESTful endpoints my DropWizard app will expose.
For the sake of this example, let's say my DropWizard app has a CarResource, which exposes a few simple RESTful endpoints for CRUDding cars:
#Path("/cars")
public class CarResource extends Resource {
// CRUDs car instances to some database (DAO).
public CardDao carDao = new CarDao();
#POST
public Car createCar(String make, String model, String rgbColor) {
Car car = new Car(make, model, rgbColor);
carDao.saveCar(car);
return car;
}
#GET
#Path("/make/{make}")
public List<Car> getCarsByMake(String make) {
List<Car> cars = carDao.getCarsByMake(make);
return cars;
}
}
So I would imagine that a structured API client would be something like a CarServiceClient:
// Packaged up in a JAR library. Can be used by any Java executable to hit the Car Service
// endpoints.
public class CarServiceClient {

    public HttpClient httpClient;

    public Car createCar(String make, String model, String rgbColor) {
        // Use 'httpClient' to make an HTTP POST to the /cars endpoint.
        // Needs to deserialize JSON returned from server into a `Car` instance.
        // But also needs to handle if the server threw a `WebApplicationException` or
        // returned a NULL.
    }

    public List<Car> getCarsByMake(String make) {
        // Use 'httpClient' to make an HTTP GET to the /cars/make/{make} endpoint.
        // Needs to deserialize JSON returned from server into a list of `Car` instances.
        // But also needs to handle if the server threw a `WebApplicationException` or
        // returned a NULL.
    }
}
But the only two official references to DropWizard clients I can find totally contradict one another:
DropWizard recommended project structure - which claims I should put my client code in a car-client project under car.service.client package; but then...
DropWizard Client manual - which makes it seem like a "DropWizard Client" is meant for integrating my DropWizard app with other RESTful web services (thus acting as a middleman).
So I ask, what is the standard way of writing Java API clients for your DropWizard web services? Does DropWizard have a client-library I can utilize for this type of use case? Am I supposed to be implementing the client via some Jersey client API? Can someone add pseudo-code to my CarServiceClient so I can understand how this would work?
Here is a pattern you can use with the JAX-RS client.
To get the client:
javax.ws.rs.client.Client init(JerseyClientConfiguration config, Environment environment) {
    return new JerseyClientBuilder(environment).using(config).build("my-client");
}
You can then make calls the following way:
javax.ws.rs.core.Response post = client
        .target("http://...")
        .request(MediaType.APPLICATION_JSON)
        .header("key", value)
        .accept(MediaType.APPLICATION_JSON)
        .post(Entity.json(myObj));
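From there you would typically check the status and read the entity yourself; a small sketch, assuming the server returns the created Car as JSON:

if (post.getStatus() == Response.Status.OK.getStatusCode()) {
    Car created = post.readEntity(Car.class);
    // use 'created'
} else {
    // handle error responses (e.g. a WebApplicationException mapped on the server side) or an empty body
}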
Yes, what dropwizard-client provides is only meant to be used by the service itself, most likely to communicate with other services. It doesn't provide anything for client applications directly.
It doesn't do much magic with HttpClients anyway. It simply configures the client according to the yml file, assigns the existing Jackson object mapper and validator to the Jersey client, and I think reuses the application's thread pool. You can check all that in https://github.com/dropwizard/dropwizard/blob/master/dropwizard-client/src/main/java/io/dropwizard/client/JerseyClientBuilder.java
I think I'd go ahead and structure my classes as you did, using the Jersey Client. The following is an abstract class I've been using for client services:
public abstract class HttpRemoteService {

    private static final String AUTHORIZATION_HEADER = "Authorization";
    private static final String TOKEN_PREFIX = "Bearer ";

    private Client client;

    protected HttpRemoteService(Client client) {
        this.client = client;
    }

    protected abstract String getServiceUrl();

    protected WebResource.Builder getSynchronousResource(String resourceUri) {
        return client.resource(getServiceUrl() + resourceUri).type(MediaType.APPLICATION_JSON_TYPE);
    }

    protected WebResource.Builder getSynchronousResource(String resourceUri, String authToken) {
        return getSynchronousResource(resourceUri).header(AUTHORIZATION_HEADER, TOKEN_PREFIX + authToken);
    }

    protected AsyncWebResource.Builder getAsynchronousResource(String resourceUri) {
        return client.asyncResource(getServiceUrl() + resourceUri).type(MediaType.APPLICATION_JSON_TYPE);
    }

    protected AsyncWebResource.Builder getAsynchronousResource(String resourceUri, String authToken) {
        return getAsynchronousResource(resourceUri).header(AUTHORIZATION_HEADER, TOKEN_PREFIX + authToken);
    }

    protected void isAlive() {
        client.resource(getServiceUrl()).get(ClientResponse.class);
    }
}
and here is how I make it concrete:
private class TestRemoteService extends HttpRemoteService {

    protected TestRemoteService(Client client) {
        super(client);
    }

    @Override
    protected String getServiceUrl() {
        return "http://localhost:8080";
    }

    public Future<TestDTO> get() {
        return getAsynchronousResource("/get").get(TestDTO.class);
    }

    public void post(Object object) {
        getSynchronousResource("/post").post(object);
    }

    public void unavailable() {
        getSynchronousResource("/unavailable").get(Object.class);
    }

    public void authorize() {
        getSynchronousResource("/authorize", "ma token").put();
    }
}
If anyone is trying to use DW 0.8.2 when building a client and you're getting the following error:
cannot access org.apache.http.config.Registry
class file for org.apache.http.config.Registry not found
at org.apache.maven.plugin.compiler.AbstractCompilerMojo.execute(AbstractCompilerMojo.java:858)
at org.apache.maven.plugin.compiler.CompilerMojo.execute(CompilerMojo.java:129)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
... 19 more
Update dropwizard-client in your pom.xml from 0.8.2 to 0.8.4 and you should be good. I believe a sub-dependency was updated, which fixed it.
<dependency>
    <groupId>io.dropwizard</groupId>
    <artifactId>dropwizard-client</artifactId>
    <version>0.8.4</version>
    <scope>compile</scope>
</dependency>
You can integrate with the Spring Framework to implement this.
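Presumably this means using something like Spring's RestTemplate as the HTTP layer inside CarServiceClient. A minimal sketch under that assumption (the base URL is made up, and the Car DTO is carried over from the question; DropWizard itself is not involved on the client side here):

import java.util.List;

import org.springframework.core.ParameterizedTypeReference;
import org.springframework.http.HttpMethod;
import org.springframework.http.ResponseEntity;
import org.springframework.web.client.RestTemplate;

public class CarServiceClient {

    private final RestTemplate restTemplate = new RestTemplate();
    private final String baseUrl = "http://localhost:8080"; // assumed host/port

    public Car createCar(String make, String model, String rgbColor) {
        // POST the new car to /cars and deserialize the returned JSON into a Car.
        return restTemplate.postForObject(baseUrl + "/cars",
                new Car(make, model, rgbColor), Car.class);
    }

    public List<Car> getCarsByMake(String make) {
        // GET /cars/make/{make} and deserialize the JSON array into a List<Car>.
        ResponseEntity<List<Car>> response = restTemplate.exchange(
                baseUrl + "/cars/make/{make}", HttpMethod.GET, null,
                new ParameterizedTypeReference<List<Car>>() {}, make);
        return response.getBody();
    }
}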
Please look at the code I posted below. FYI, this is from the Oracle website's websocket sample:
https://netbeans.org/kb/docs/javaee/maven-websocketapi.html
My question is: how does this work?! -- especially the broadcastFigure function of MyWhiteboard. It is not an abstract function that is overridden, and it is not "registered" with another class in the traditional sense. The only way I can see it working is that when the compiler sees the @OnMessage annotation, it inserts a call to broadcastFigure into the compiled code for when a new message is received. But before calling this function, it passes the received data through the FigureDecoder class, based on this decoder being specified in the @ServerEndpoint annotation. Within broadcastFigure, when sendObject is called, the compiler inserts a reference to FigureEncoder, based on what's specified in the @ServerEndpoint annotation. Is this accurate?
If so, why did this implementation do things this way using annotations? Before looking at this, I would have expected there to be an abstract OnMessage function which needs to be overridden and explicit registration functions for Encoder and Decoder. Instead of such a "traditional" approach, why does the websocket implementation do it via annotations?
Thank you.
MyWhiteboard.java:
@ServerEndpoint(value = "/whiteboardendpoint", encoders = {FigureEncoder.class}, decoders = {FigureDecoder.class})
public class MyWhiteboard {

    private static Set<Session> peers = Collections.synchronizedSet(new HashSet<Session>());

    @OnMessage
    public void broadcastFigure(Figure figure, Session session) throws IOException, EncodeException {
        System.out.println("broadcastFigure: " + figure);
        for (Session peer : peers) {
            if (!peer.equals(session)) {
                peer.getBasicRemote().sendObject(figure);
            }
        }
    }

    @OnError
    public void onError(Throwable t) {
    }

    @OnClose
    public void onClose(Session peer) {
        peers.remove(peer);
    }

    @OnOpen
    public void onOpen(Session peer) {
        peers.add(peer);
    }
}
FigureEncoder.java
public class FigureEncoder implements Encoder.Text<Figure> {

    @Override
    public String encode(Figure figure) throws EncodeException {
        return figure.getJson().toString();
    }

    @Override
    public void init(EndpointConfig config) {
        System.out.println("init");
    }

    @Override
    public void destroy() {
        System.out.println("destroy");
    }
}
FigureDecoder.java:
public class FigureDecoder implements Decoder.Text<Figure> {

    @Override
    public Figure decode(String string) throws DecodeException {
        JsonObject jsonObject = Json.createReader(new StringReader(string)).readObject();
        return new Figure(jsonObject);
    }

    @Override
    public boolean willDecode(String string) {
        try {
            Json.createReader(new StringReader(string)).readObject();
            return true;
        } catch (JsonException ex) {
            ex.printStackTrace();
            return false;
        }
    }

    @Override
    public void init(EndpointConfig config) {
        System.out.println("init");
    }

    @Override
    public void destroy() {
        System.out.println("destroy");
    }
}
Annotations have their advantages and disadvantages, and there is a lot to say about choosing to create an annotation-based API versus a (how you say) "traditional" API using interfaces. I won't go into that, since you'll find plenty of wars online.
Used correctly, annotations provide better information about what a class's or method's responsibility is. Many prefer annotations, and as such they have become a trend and are used everywhere.
With that out of the way, let's get back to your question:
Why did this implementation do things this way using annotations? Before looking at this, I would have expected there to be an abstract OnMessage function which needs to be overridden and explicit registration functions for Encoder and Decoder. Instead of such a "traditional" approach, why does the websocket implementation do it via annotations?
Actually, it doesn't. Annotations are just one provided way of using the API. If you don't like them, you can do it the old way. Here is an excerpt from the JSR-356 spec:
There are two main means by which an endpoint can be created. The first means is to implement certain of the API classes from the Java WebSocket API with the required behavior to handle the endpoint lifecycle, consume and send messages, publish itself, or connect to a peer. Often, this specification will refer to this kind of endpoint as a programmatic endpoint. The second means is to decorate a Plain Old Java Object (POJO) with certain of the annotations from the Java WebSocket API. The implementation then takes these annotated classes and creates the appropriate objects at runtime to deploy the POJO as a websocket endpoint. Often, this specification will refer to this kind of endpoint as an annotated endpoint.
Again, people prefer using annotations and that's what you'll find most tutorials using, but you can do without them if you want to badly enough.
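For completeness, here is a rough sketch of what the spec's "programmatic endpoint" alternative could look like for the whiteboard example above, with the message handler and the encoders/decoders registered explicitly instead of via annotations (the deployment snippet is only an assumption about where you would build the ServerEndpointConfig, e.g. inside a ServerApplicationConfig implementation):

public class MyWhiteboardEndpoint extends Endpoint {

    @Override
    public void onOpen(final Session session, EndpointConfig config) {
        // Explicit registration replaces the @OnMessage annotation.
        session.addMessageHandler(new MessageHandler.Whole<Figure>() {
            @Override
            public void onMessage(Figure figure) {
                // the broadcast logic from broadcastFigure(...) would go here
            }
        });
    }
}

// Encoders and decoders are registered explicitly instead of via @ServerEndpoint:
ServerEndpointConfig config = ServerEndpointConfig.Builder
        .create(MyWhiteboardEndpoint.class, "/whiteboardendpoint")
        .encoders(Collections.<Class<? extends Encoder>>singletonList(FigureEncoder.class))
        .decoders(Collections.<Class<? extends Decoder>>singletonList(FigureDecoder.class))
        .build();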