Akka Http Request and Response pattern - java

I have a requirement where a client calls a POST REST endpoint created with Akka HTTP. As soon as the request is in the POST handler, I need to pass the posted object to a stream (consisting of a source, several flows, a sink, etc.) and get the response back from the sink so that I can return it to the client.
I have been going through some articles and have seen the code below, but I have a concern: I don't want to materialize the stream for every request. I only want to materialize the stream once and keep passing elements to it.
Below is the high-level idea of what I saw:
val route: Route =
  path("dummy path") { p =>
    get {
      (extract(_.request) & extractMaterializer) { (req, mat) =>
        Source.single(req).runWith(sink)(mat)
        complete {
          s"<h1>Say hello to akka-http. p=$p</h1>"
        }
      }
    }
  }
I was thinking of creating an actor and passing the object to that actor. I can create a source from Source.actorRef and connect several flows with this source. But I am not sure, how to get back the response from the sink. Something like:
val actor: ActorRef = some actor
Source.actorRef(actor).via(flows).to(Sink).run() // materialized stream

val route: akka.http.scaladsl.server.Route =
  path("post" / Segment) { p =>
    post {
      (extract(_.request) & extractMaterializer) { (req, mat) =>
        response = actor.ask(message) // get back the response
        complete {
          response
        }
      }
    }
  }
Or is there anything else that I can incorporate in my use case?

I guess what you want is to make the processing of a request flow through a stream (materialized only once) and send the response back to the user from that stream. Maybe a queue source and an actor in between can do the job:
import java.util.concurrent.TimeUnit

import akka.actor.{Actor, ActorRef, ActorSystem, Props}
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives.{get, onSuccess, pathEnd, pathPrefix}
import akka.http.scaladsl.server.directives.RouteDirectives.complete
import akka.pattern.ask
import akka.stream.scaladsl.{Keep, Sink, Source, SourceQueueWithComplete}
import akka.stream.{ActorMaterializer, OverflowStrategy, QueueOfferResult}
import akka.util.Timeout

import scala.concurrent.ExecutionContext

object TestApp2 extends App {

  implicit val actorSystem = ActorSystem("test-system")
  implicit val mat = ActorMaterializer()
  implicit val ec = mat.executionContext

  // Materialized exactly once, at startup; each request only offers elements to the queue.
  val streamSource = Source
    .queue[(Message, ActorRef)](100, OverflowStrategy.dropNew)
    .map { p =>
      // do anything here
      println("I am processing request")
      ("It works", p._2)
    }
    .toMat(Sink.foreach { resp =>
      resp._2 ! resp._1
    })(Keep.left)
    .run()

  implicit val timeout = Timeout(10000, TimeUnit.MILLISECONDS)

  val internalActor =
    actorSystem.actorOf(Props(new InternalActor(streamSource)))

  Http(actorSystem)
    .bindAndHandle(
      getRoutes(internalActor),
      "0.0.0.0",
      8080
    )

  def getRoutes(
      internalActor: ActorRef
  )(implicit mat: ActorMaterializer, ec: ExecutionContext, timeout: Timeout) = {
    pathPrefix("healthcheck") {
      get {
        pathEnd {
          val responseReturned = internalActor ? Message()
          onSuccess(responseReturned) {
            case response: String =>
              complete(response)
            case _ => complete("error")
          }
        }
      }
    }
  }
}

case class Message()

class InternalActor(streamSource: SourceQueueWithComplete[(Message, ActorRef)])(
    implicit ec: ExecutionContext
) extends Actor {
  override def receive: Receive = {
    case m: Message =>
      val senderRef = sender()
      streamSource.offer((m, senderRef)).map {
        case QueueOfferResult.Enqueued    => // do nothing for success
        case QueueOfferResult.Dropped     => senderRef ! "error" // backpressure dropped the element
        case QueueOfferResult.Failure(ex) => senderRef ! "error" // stream failed
        case QueueOfferResult.QueueClosed => senderRef ! "error" // stream already completed
      }
  }
}
curl 'http://localhost:8080/healthcheck'
It works
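Since the question is tagged java while the snippet above is Scala, here is a rough Java DSL sketch of the same materialize-once queue pattern. Treat it as an illustration under assumptions, not thread code: it presumes Akka 2.6 / Akka HTTP 10.2+ APIs (RunnableGraph.run(system), Http.newServerAt), and the class name QueueServer is made up. Instead of the reply actor plus ask, each request carries its own CompletableFuture through the stream, and the sink completes it:

import java.util.concurrent.CompletableFuture;

import akka.actor.ActorSystem;
import akka.http.javadsl.Http;
import akka.http.javadsl.server.AllDirectives;
import akka.http.javadsl.server.Route;
import akka.japi.Pair;
import akka.stream.OverflowStrategy;
import akka.stream.QueueOfferResult;
import akka.stream.javadsl.Keep;
import akka.stream.javadsl.Sink;
import akka.stream.javadsl.Source;
import akka.stream.javadsl.SourceQueueWithComplete;

public class QueueServer extends AllDirectives {

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("test-system");

        // Materialized exactly once, at startup; each request only offers an element.
        SourceQueueWithComplete<Pair<String, CompletableFuture<String>>> queue =
            Source.<Pair<String, CompletableFuture<String>>>queue(100, OverflowStrategy.dropNew())
                .map(p -> {
                    // the "several flows" of processing would go here
                    return Pair.create("It works", p.second());
                })
                .toMat(Sink.foreach(p -> p.second().complete(p.first())), Keep.left())
                .run(system);

        QueueServer app = new QueueServer();
        Http.get(system).newServerAt("0.0.0.0", 8080).bind(app.createRoute(queue));
    }

    private Route createRoute(SourceQueueWithComplete<Pair<String, CompletableFuture<String>>> queue) {
        return path("healthcheck", () -> get(() -> {
            CompletableFuture<String> promise = new CompletableFuture<>();
            queue.offer(Pair.create("ping", promise)).thenAccept(result -> {
                // Anything other than Enqueued means backpressure dropped us or the queue is closed.
                if (result != QueueOfferResult.enqueued()) {
                    promise.completeExceptionally(new IllegalStateException("queue rejected the request"));
                }
            });
            return onSuccess(promise, this::complete);
        }));
    }
}

The same promise-per-element idea also works in the Scala version: carrying a Promise[String] instead of an ActorRef through the queue removes the need for InternalActor and the ask timeout entirely.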


ExpressJS queue requests

I am looking to queue requests to Express so that only one request is processed on an endpoint at a time. I have found some examples of this: ExpressJS backend put requests into a queue.
It seems, though, that they require a separate function for each endpoint. I am trying to create one function that allows me to pass a queue name and then stack items in the specified queues.
It is an API listening for requests; upon receiving one, it executes a request to another online API before relaying the result back to the user through the original Express endpoint. Ultimately I will then look to add some very basic caching for each endpoint, just to store a short JSON string for 3 seconds before expiring. That way it returns the cached string within the 3-second limit rather than fetching the data again from online.
Here is as far as I got; I would be curious to hear if there are better ways:
// UI request -> check cache -> return response || call request then return response
// Queue items on endpoint
class QueueUnique {
  func;
  q;
  requestCache = [];
  constructor(func) {
    this.q = Promise.resolve();
    this.func = func;
  }
  add(request) {
    // Fetch the cached item related to the current endpoint queue
    const cachedItem = this.requestCache.find(
      (itm) => itm.queueName === request.queueName
    );
    // If the cache is older than X seconds, make a new request;
    // otherwise return the cache
    if (cachedItem && new Date().getTime() - cachedItem.runtime > 3000) {
      console.log(
        "Cache is over 3 seconds old. Doing new request. Queue name: " +
          request.queueName
      );
      // no cache, forward request:
      // separate this into a function
      //res.sendResponse = res.send
      //res.send = (body) => {
      //  request.body = body
      //  this.updateCache(request);
      //  res.sendResponse(body)
      //}
      //next()
      this.updateCache(request);
    } else if (cachedItem) {
      console.log("Valid cache, return cache");
      // res.send(request.body)
      this.updateCache(request);
    } else {
      console.log("no cache");
      // continue as normal as if no cache
      // no cache, forward request: same as first run
      this.addToCache(request);
    }
    // Do I need to use await before setting datetime?
    // then cache
    // Set the current time as a value on the item
    request.runtime = new Date().getTime();
    const queuedFunc = this.queue(request);
    queuedFunc();
  }
  addToCache(request) {
    // Add the new item to the permanent cache
    this.requestCache.push(request);
  }
  updateCache(request) {
    // Update the permanent request cache entry
    const arrayIndex = this.requestCache.findIndex(
      (itm) => itm.queueName === request.queueName
    );
    this.requestCache[arrayIndex] = request;
  }
  queue(item) {
    return () => {
      this.q = this.q
        .then(() => this.func(item))
        .catch((err) => {
          console.log(err);
        });
      return this.q;
    };
  }
}
const response = (item) => {
  return new Promise((resolve) => {
    setTimeout(() => {
      console.log("say", item.payload);
      resolve();
    }, item.delay);
  });
};

const queues = {};
function test(bar, payload) {
  if (!queues[bar]) {
    queues[bar] = new QueueUnique(response);
  }
  queues[bar].add(payload);
  console.log(queues);
  return queues;
}

test("te", {
  queueName: "ping",
  payload: "one",
  delay: 3000,
});
test("te", {
  queueName: "ping",
  payload: "one",
  delay: 3000,
});
test("te", {
  queueName: "ping",
  payload: "one",
  delay: 3000,
});
test("te2", {
  queueName: "ping",
  payload: "two",
  delay: 1000,
});
test("te2", {
  queueName: "ping",
  payload: "two",
  delay: 1000,
});
Here is what I have for now. It would be great to hear about improvements or issues. It is converted to TypeScript, as that is what I will ultimately be using.
import queueCache from './middleware/queueCache'
...
app.locals.cacheTimeout = 3000

app.get('/test', queueCache, (_req, res) => {
  res.json({ test: 'test' })
})
app.get('/test1', queueCache, (_req, res) => {
  res.json({ test1: 'test1' })
})
queueCache.ts:
import { Request, Response, NextFunction } from 'express'

interface requestData {
  queueName?: string
  cachedData?: string
  runtime?: number
}

const cachedItemList = {} as Array<requestData>
const queue = [] as Array<QueueUnique>
const requestCache = [] as Array<requestData>

// Class to queue items and cache results
class QueueUnique {
  func
  q: Promise<unknown>
  constructor(
    func: (req: Request, res: Response, next: NextFunction) => Promise<unknown>
  ) {
    this.func = func
    this.q = Promise.resolve()
  }
  add(req: Request, res: Response, next: NextFunction) {
    // Check if the item is already cached
    if (checkCache(req, res)) {
      return
    }
    // If not cached, add to queue
    const queuedFunc = this.queue(req, res, next)
    queuedFunc()
  }
  queue(req: Request, res: Response, next: NextFunction) {
    return () => {
      this.q = this.q
        .then(() => this.func(req, res, next))
        .catch((err) => {
          console.log(err)
        })
      return this.q
    }
  }
}

const response = (req: Request, res: Response, next: NextFunction) => {
  // Do another check to see if the item that just finished in the queue created a useful cache
  return new Promise((resolve) => {
    if (checkCache(req, res)) {
      resolve(true)
      return
    }
    setTimeout(() => {
      if (cachedItemList[0].queueName) {
        // Got this far and a cache entry exists, so it must be older than the set time; starting a new request.
        // Return response to user
        res.sendResponse = res.json
        res.json = (body) => {
          res.sendResponse(body)
          // Find the current item in the request cache
          const arrayIndex = requestCache?.findIndex(
            (itm) => itm.queueName === req.url
          )
          // Set the time that the request was stored
          requestCache[arrayIndex].runtime = new Date().getTime()
          // Store the body of the response in the cache
          requestCache[arrayIndex].cachedData = body
          return res
        }
      } else {
        // There was no cache
        // Return the response to the caller
        res.sendResponse = res.json
        res.json = (body) => {
          res.sendResponse(body)
          // Only use the cache on GET requests. When not GET, this middleware only acts as a queue.
          if (req.method === 'GET') {
            // Check if it is already in the cache to avoid duplicates
            // Overcomes an error: https://github.com/expressjs/express/issues/4826
            const arrayIndex = requestCache?.findIndex(
              (itm) => itm.queueName === req.url
            )
            if (arrayIndex === -1) {
              requestCache.push({
                cachedData: body, // Add the new item to the permanent cache
                queueName: req.url, // Add the request URL to the item for later reference
                runtime: new Date().getTime() // Add the time the request was made
              })
            }
          }
          return res
        }
      }
      next()
      resolve(true)
    }, 4000)
  })
}

function checkCache(req: Request, res: Response) {
  // Fetch the cached item related to the current endpoint queue, which is named after the endpoint URL
  cachedItemList[0] =
    requestCache.find((itm) => itm.queueName === req.url) || {}
  // If the current request is within X seconds of the last successful request, return the cached version
  if (
    cachedItemList[0].runtime &&
    new Date().getTime() - cachedItemList[0].runtime <
      req.app.locals.cacheTimeout
  ) {
    // Return the cached item to the user
    res.json(cachedItemList[0].cachedData)
    return true
  } else {
    return false
  }
}

// Create multiple queues, one for each endpoint
function sortQueues(req: Request, res: Response, next: NextFunction) {
  // Use the endpoint name to create a queue within the queues array
  if (!queue[req.route.path]) {
    queue[req.route.path] = new QueueUnique(response)
  }
  queue[req.route.path].add(req, res, next)
}

export default sortQueues

RetryPolicy does not work with coroutines

I made a simple gRPC server in Kotlin with coroutines and a client in Java. In the client I enabled and configured a retry policy, but it did not work. I spent a lot of time trying to find a solution, believing that my client was broken, but the problem was in the server. I will show you the code.
This is my proto file:
syntax = "proto3";

option java_multiple_files = true;
option java_package = "br.com.will.protoclasses";
option java_outer_classname = "NotificationProto";

package notification;

service Notification {
  rpc SendPush (SendPushNotificationRequest) returns (SendPushNotificationResponse);
}

message SendPushNotificationRequest {
  string title = 1;
  string message = 2;
  string customer_id = 3;
}

message SendPushNotificationResponse {
  string message = 1;
}
This is the client:
open class NotificationClient(private val channel: ManagedChannel) {
    private val stub: NotificationGrpcKt.NotificationCoroutineStub =
        NotificationGrpcKt.NotificationCoroutineStub(channel)

    suspend fun send() {
        val request =
            SendPushNotificationRequest.newBuilder()
                .setCustomerId(UUID.randomUUID().toString())
                .setMessage("test")
                .setTitle("test")
                .build()
        val response = stub.sendPush(request)
        println("Received: ${response.message}")
    }
}
suspend fun main(args: Array<String>) {
    val port = System.getenv("PORT")?.toInt() ?: 50051

    val retryPolicy: MutableMap<String, Any> = HashMap()
    retryPolicy["maxAttempts"] = 5.0
    retryPolicy["initialBackoff"] = "10s"
    retryPolicy["maxBackoff"] = "30s"
    retryPolicy["backoffMultiplier"] = 2.0
    retryPolicy["retryableStatusCodes"] = listOf<Any>("INTERNAL")

    val methodConfig: MutableMap<String, Any> = HashMap()
    val name: MutableMap<String, Any> = HashMap()
    name["service"] = "notification.Notification"
    name["method"] = "SendPush"
    methodConfig["name"] = listOf<Any>(name)
    methodConfig["retryPolicy"] = retryPolicy

    val serviceConfig: MutableMap<String, Any> = HashMap()
    serviceConfig["methodConfig"] = listOf<Any>(methodConfig)
    print(serviceConfig)

    val channel = ManagedChannelBuilder.forAddress("localhost", port)
        .usePlaintext()
        .defaultServiceConfig(serviceConfig)
        .enableRetry()
        .build()

    val client = NotificationClient(channel)
    client.send()
}
This is part of my gRPC service, where I was testing the retry policy (the client's retry policy does not work with this implementation):
override suspend fun sendPush(request: SendPushNotificationRequest): SendPushNotificationResponse {
    val count: Int = retryCounter.incrementAndGet()
    log.info("Received a call on method sendPushNotification with payload -> $request")
    if (random.nextFloat() < UNAVAILABLE_PERCENTAGE) {
        log.info("Returning stubbed INTERNAL error. count: $count")
        throw Status.INTERNAL.withDescription("error").asRuntimeException()
    }
    log.info("Returning successful Hello response, count: $count")
    return SendPushNotificationResponse.newBuilder().setMessage("success").build()
}
Another implementation, now using StreamObserver (this implementation works fine):
override fun sendPush(
    request: SendPushNotificationRequest?,
    responseObserver: StreamObserver<SendPushNotificationResponse>?
) {
    log.info("Received a call on method sendPushNotification with payload -> $request")
    val count: Int = retryCounter.incrementAndGet()
    if (random.nextFloat() < UNAVAILABLE_PERCENTAGE) {
        log.info("Returning stubbed UNAVAILABLE error. count: $count")
        responseObserver!!.onError(
            Status.UNAVAILABLE.withDescription("error").asRuntimeException()
        )
    } else {
        log.info("Returning successful Hello response, count: $count")
        responseObserver!!.onNext(SendPushNotificationResponse.newBuilder().setMessage("success").build())
        return responseObserver.onCompleted()
    }
}
The question is: what is wrong? Can someone help me?
Is this the signature generated by gRPC?
sendPush(request: SendPushNotificationRequest): SendPushNotificationResponse
gRPC relies on the StreamObserver to send the response to the client, once responseObserver.onCompleted() or responseObserver.onError() is called, so make sure your coroutine code maps onto that correctly. Two things worth checking: first, the two implementations return different status codes (INTERNAL vs UNAVAILABLE), and retryableStatusCodes must list whichever code the server actually returns. Second, per the gRPC retry design, an RPC becomes committed (and is never retried) as soon as the client receives response headers; an implementation that sends headers before failing defeats the retry policy, while one that fails with a trailers-only error, as the StreamObserver version does by calling onError before any onNext, stays retryable.

angular, Java : HTTP request not going to server, while URL is valid

I am working on an application with a Java backend and an Angular frontend. I am using Angular Formly; data is coming into the service, but from the service it is not going to the server.
Let me share the code snippets:
Service Code:
export class RecommendationRequestService {
  readonly ROOT_URL = environment.apiUrl + '/am/v1/recommendation-requests';

  constructor(private http: HttpClient, private configService: RecommenderConfigService) {
  }

  updateData(interviewStatus: InterviewStatusRecommendation): Observable<any> {
    console.log(interviewStatus);
    return this.http.put<any>(this.ROOT_URL, interviewStatus);
  }
}
This line prints the intended data: console.log(interviewStatus);
The server is running.
The code from which the service is being called:
onSubmit() {
  this.model.recommendationRequest.agentInitiationId = this.agentInitiationId;
  const subs = this.service.updateData(this.model).subscribe(response => {
      console.log('------' + response);
      if (response === 'OK') {
        this.notify.success('Request Recommendation Update success.');
      } else {
        this.notify.error('Request Recommendation Update fail.');
      }
    },
    err => {
      if (err.error.hasOwnProperty('code') && err.error.code === 1000) {
        this.notify.error(CommonEnum.VALIDATION_ERROR);
      }
    });
  subs.unsubscribe();
}
The line console.log('------' + response); should print at least ------, but nothing appears.
I have checked the network monitor in the browser: no call is going out.
What might be the possible issue? Anything from Formly?
You are doing it incorrectly, as Aldin Bradaric also noted in the comments: as soon as you make the call, at the very next moment, you unsubscribe from it. This is what you should do:
public subs: Subscription[] = []; // import { Subscription } from 'rxjs';

onSubmit() {
  this.model.recommendationRequest.agentInitiationId = this.agentInitiationId;
  const subs = this.service.updateData(this.model).subscribe(response => {
      console.log('------' + response);
      if (response === 'OK') {
        this.notify.success('Request Recommendation Update success.');
      } else {
        this.notify.error('Request Recommendation Update fail.');
      }
    },
    err => {
      if (err.error.hasOwnProperty('code') && err.error.code === 1000) {
        this.notify.error(CommonEnum.VALIDATION_ERROR);
      }
    });
  //subs.unsubscribe(); // remove it and add it to the lifecycle hooks
  this.subs.push(subs);
}

ngOnDestroy() {
  // unsubscribe from the whole array of subscriptions
  this.subs.forEach(sub => sub.unsubscribe())
}

does retrofit interceptors proceed(request) make real request?

fun getTimeOutInterceptor(): Interceptor {
    return Interceptor {
        val request: Request = it.request()
        val response = it.proceed(request)
        try {
            // string() consumes the body stream, so rebuild the response with an
            // in-memory copy that downstream consumers can still read
            val content: String? = response.body()?.string()
            response.newBuilder()
                .body(ResponseBody.create(response.body()?.contentType(), content ?: ""))
                .build()
        } catch (exception: IOException) {
            // Toast.makeText(BaseActivity.baseContext, "Time Out :)", Toast.LENGTH_LONG).show()
            Log.d("RetrofitClientInstance", "TimeOutFRomout")
            response
        }
    }
}
1. I do not understand the following:
- Does proceed(request) actually call the HTTP server and send the request?
- Why does it rebuild the response via response.newBuilder().body(ResponseBody.create(response.body()?.contentType(), content)).build()?
2. Do multiple proceed(request) calls make it slow?
3. How will this handle a timeout?
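This thread has no posted answer, but a small sketch may clarify the mechanics the questions are about; this is my illustration for plain OkHttp 3.x, not code from the thread. chain.proceed(request) is what actually sends the request: it hands the request to the next interceptor in the chain, and the single real network exchange happens at the end of that chain (so calling proceed() more than once issues more than one real request). The rebuild exists because ResponseBody.string() consumes the one-shot body stream:

import java.io.IOException;

import okhttp3.Interceptor;
import okhttp3.MediaType;
import okhttp3.Response;
import okhttp3.ResponseBody;

// Hypothetical name; a body-buffering application interceptor for OkHttp 3.x.
public class BufferingInterceptor implements Interceptor {
    @Override
    public Response intercept(Chain chain) throws IOException {
        // proceed() passes the request down the chain; the one real
        // network call happens at the end of it.
        Response response = chain.proceed(chain.request());

        // string() reads and closes the one-shot body stream, so without a
        // rebuild any later consumer (including Retrofit's converter) would
        // find the body already consumed.
        ResponseBody body = response.body();
        MediaType contentType = body != null ? body.contentType() : null;
        String content = body != null ? body.string() : "";

        // Hand back an equivalent response whose body is an in-memory copy
        // and can therefore be read again.
        return response.newBuilder()
                .body(ResponseBody.create(contentType, content))
                .build();
    }
}

As for timeouts: they are configured on the OkHttpClient itself, not inside an interceptor. Within the chain they surface as an IOException (typically SocketTimeoutException) thrown by proceed() or by the body read, which is what the catch block in the question's code reacts to.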

How to end chained http requests in RxJava Vert.x?

How to end chained requests in Rx Vert.x?
HttpClient client = Vertx.vertx().createHttpClient();

HttpClientRequest request = client.request(HttpMethod.POST, "someURL")
    .putHeader("content-type", "application/x-www-form-urlencoded")
    .putHeader("content-length", Integer.toString(jsonData.length()))
    .write(jsonData);

request.toObservable()
    // flatMap HttpClientResponse -> Observable<Buffer>
    .flatMap(httpClientResponse -> { // something
        return httpClientResponse.toObservable();
    })
    .map(buffer -> { return buffer.toString(); })
    // flatMap data -> Observable<HttpClientResponse>
    .flatMap(postData -> client.request(HttpMethod.POST, "someURL")
        .putHeader("content-type", "application/x-www-form-urlencoded")
        .putHeader("content-length", Integer.toString(postData.length()))
        .write(postData)
        .toObservable())
    // flatMap HttpClientResponse -> Observable<Buffer>
    .flatMap(httpClientResponse -> {
        return httpClientResponse.toObservable();
    }); // ...other operators

request.end();
Notice that I have .end() for the top request. How do I end the request that is inside the flatMap? Do I even need to end it?
There are multiple ways to ensure that request.end() is called, but I would dig into the Vert.x documentation, or its source code, to see whether it calls end() for you. Otherwise, one option could be:
final HttpClientRequest request = ...
request.toObservable()
    .doOnUnsubscribe(new Action0() {
        @Override
        public void call() {
            request.end();
        }
    });
I think you can do something like the following code.
The main idea is that you don't directly use the HttpClientRequest as obtained by the Vert.x client. Instead you create another Flowable that invokes end() as soon as the first subscription is received.
Here, for instance, you can obtain the request through a pair of custom methods: in this case request1() and request2(). They both use doOnSubscribe() to trigger the end() you need. Read its description on the ReactiveX page.
This example uses Vert.x and RxJava 2; I hope you can use this setup.
import io.reactivex.Flowable;
import io.vertx.core.http.HttpMethod;
import io.vertx.reactivex.core.Vertx;
import io.vertx.reactivex.core.buffer.Buffer;
import io.vertx.reactivex.core.http.HttpClient;
import io.vertx.reactivex.core.http.HttpClientRequest;
import io.vertx.reactivex.core.http.HttpClientResponse;
import org.junit.Test;

public class StackOverflow {

    @Test public void test() {
        Buffer jsonData = Buffer.buffer("..."); // the json data.
        HttpClient client = Vertx.vertx().createHttpClient(); // the vertx client.

        request1(client)
            .flatMap(httpClientResponse -> httpClientResponse.toFlowable())
            .map(buffer -> buffer.toString())
            .flatMap(postData -> request2(client, postData))
            .forEach(httpResponse -> {
                // do something with the returned data
            });
    }

    private Flowable<HttpClientResponse> request1(HttpClient client) {
        HttpClientRequest request = client.request(HttpMethod.POST, "someURL");
        return request
            .toFlowable()
            .doOnSubscribe(subscription -> request.end());
    }

    private Flowable<HttpClientResponse> request2(HttpClient client, String postData) {
        HttpClientRequest request = client.request(HttpMethod.POST, "someURL");
        // do something with postData
        return request
            .toFlowable()
            .doOnSubscribe(subscription -> request.end());
    }
}
