Viewing the exact HTTP Request/Response communication of a JAX-RS client - java

I'm trying to figure out how to debug my tests by viewing the HTTP traffic my client is sending:
testPutActivitity(com.lm.controller.JaxRSActivatorTest) Time elapsed: 0.521 sec <<< FAILURE!
java.lang.AssertionError: expected:<404> but was:<200>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at com.lm.controller.JaxRSActivatorTest.testPutActivitity(JaxRSActivatorTest.java:44)
With the new Undertow server, some things seem to have changed, so the documentation I'm finding no longer seems to be relevant.
What I'd like to know is: how can I view the HTTP Request/Response transmitted from the client? (If it matters, I should be using the RESTEasy client.)
package com.lm.controller;
import com.lm.test.RESTCtrlTestBase;
import static com.lm.test.TestUtil.testWar;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.WebTarget;
import javax.ws.rs.core.Response;
import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.shrinkwrap.api.Filters;
import org.jboss.shrinkwrap.api.spec.WebArchive;
import static org.junit.Assert.assertEquals;
import org.junit.Test;
public class RootTest extends RESTCtrlTestBase {
public RootTest() {
}
Client client = ClientBuilder.newClient();
@Deployment( testable = false )
public static WebArchive createDeployment() {
WebArchive ar = testWar()
.addPackages(
false,
Filters.exclude( ".*Test.*" ),
"com.lm.controller",
"com.lm.activity"
);
// System.out.println( "ar = " + ar.toString( true ) );
return ar;
}
@Test
public void getRoot() {
WebTarget target = client.target( deploymentURI );
log.debug( "targetURI = {}", target.getUri() );
Response res = target.request().get();
assertEquals( 200, res.getStatus() );
}
}
I've tried writing an interceptor, but it doesn't output the request/response exactly as it is transmitted, and it only seems to get loaded for the server, not for the test client above.
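For reference, one way to at least see what the test client builds and receives (this is not from the original post; the filter class name is made up) is to register a JAX-RS ClientRequestFilter/ClientResponseFilter directly on the Client instance used in the test, so it runs on the client side instead of only being deployed with the server. It won't show the exact bytes on the wire (a proxy or tcpdump is needed for that), but it does show method, URI, headers and status:
import java.io.IOException;
import javax.ws.rs.client.ClientRequestContext;
import javax.ws.rs.client.ClientRequestFilter;
import javax.ws.rs.client.ClientResponseContext;
import javax.ws.rs.client.ClientResponseFilter;
// Hypothetical logging filter; prints each request and response as seen by the client.
public class WireLoggingFilter implements ClientRequestFilter, ClientResponseFilter {
    @Override
    public void filter(ClientRequestContext request) throws IOException {
        System.out.println("> " + request.getMethod() + " " + request.getUri());
        System.out.println("> headers: " + request.getStringHeaders());
    }
    @Override
    public void filter(ClientRequestContext request, ClientResponseContext response) throws IOException {
        System.out.println("< status: " + response.getStatus());
        System.out.println("< headers: " + response.getHeaders());
    }
}
It would be registered on the test client instead of the plain ClientBuilder.newClient():
Client client = ClientBuilder.newClient().register( new WireLoggingFilter() );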

Related

How to connect to Redis Cluster using Quarkus Redis and port forwarding?

I'm trying to debug a "change hostname on redirect" issue with Quarkus Redis
This works fine:
package org.acme;
import io.vertx.mutiny.core.Vertx;
import io.vertx.mutiny.redis.client.Command;
import io.vertx.mutiny.redis.client.Redis;
import io.vertx.mutiny.redis.client.Request;
import io.vertx.redis.client.RedisOptions;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import org.jboss.logging.Logger;
#Path("/redis")
public class RedisCommandsTest {
private static final Logger LOGGER = Logger.getLogger(RedisCommandsTest2.class);
#GET
#Produces(MediaType.TEXT_PLAIN)
public void hello() throws InterruptedException {
var vertx = Vertx.vertx();
final RedisOptions options = new RedisOptions()
.addConnectionString("redis://127.0.0.1:7000")
.addConnectionString("redis://127.0.0.1:7001")
.addConnectionString("redis://127.0.0.1:7002");
Redis.createClient(vertx, options)
.connect()
.subscribe().with(conn -> {
conn.send(Request.cmd(Command.GET).arg("KEY1"))
.subscribe().with(response -> {
System.out.println(response);
});
});
}
}
This doesn't work:
//application.properties
quarkus.redis.hosts=redis://127.0.0.1:7000,redis://127.0.0.1:7001,redis://127.0.0.1:7002
quarkus.redis.client-type: cluster
quarkus.redis.max-pool-size: 4
import io.quarkus.redis.client.reactive.ReactiveRedisClient;
reactiveRedisClient.get("KEY1")
.subscribe().with(response -> {
System.out.println(response);
});
Gives this error:
2021-11-12 16:40:08,059 ERROR [io.qua.mut.run.MutinyInfrastructure]
(vert.x-eventloop-thread-15) Mutiny had to drop the following
exception: io.vertx.core.impl.NoStackTraceThrowable: Failed to connect
to all nodes of the cluster
Digging into Vert.x shows this to be the actual error:
io.netty.channel.AbstractChannel$AnnotatedNoRouteToHostException: No
route to host: /10.100.23.75:6379
This looks similar to this issue, but I can't find a solution: https://github.com/vert-x3/vertx-redis-client/issues/223
(I'm port-forwarding from a Kubernetes cluster.)
The real question is why does one way work but not the other? Maybe there's some property I'm missing?
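For comparison only (this is not from the original post): quarkus.redis.client-type=cluster corresponds to a Vert.x cluster client, which discovers the node addresses from the cluster itself, whereas the working snippet above connects to the given addresses directly. The same cluster behaviour can be reproduced with the plain Vert.x client by switching the client type, which may help confirm whether the internally discovered addresses (such as 10.100.23.75:6379) are what differs; a sketch:
import io.vertx.mutiny.core.Vertx;
import io.vertx.mutiny.redis.client.Command;
import io.vertx.mutiny.redis.client.Redis;
import io.vertx.mutiny.redis.client.Request;
import io.vertx.redis.client.RedisClientType;
import io.vertx.redis.client.RedisOptions;
public class RedisClusterProbe {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        RedisOptions options = new RedisOptions()
                .setType(RedisClientType.CLUSTER) // same mode as quarkus.redis.client-type=cluster
                .addConnectionString("redis://127.0.0.1:7000")
                .addConnectionString("redis://127.0.0.1:7001")
                .addConnectionString("redis://127.0.0.1:7002");
        Redis.createClient(vertx, options)
                .connect()
                .subscribe().with(
                        conn -> conn.send(Request.cmd(Command.GET).arg("KEY1"))
                                .subscribe().with(response -> System.out.println(response)),
                        failure -> failure.printStackTrace()); // likely the same "Failed to connect to all nodes" as above
    }
}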

All feature files are not executing

Cucumber is not executing both features with the created step definitions.
I have tried with tags, and I've also given the full path of both features, but the result is still the same.
package runners;
import com.cucumber.listener.ExtentProperties;
import com.cucumber.listener.Reporter;
import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;
import managers.Common;
import managers.FileReader;
import org.apache.log4j.PropertyConfigurator;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.runner.RunWith;
import java.io.File;
@RunWith(Cucumber.class)
@CucumberOptions(
features = ".//src//test//java//FeatureList",glue = "stepDefinations",
plugin = { "com.cucumber.listener.ExtentCucumberFormatter:",
"junit:target/cucumber-results.xml"},
tags={"@API"},
monochrome = true
)
public class TestRunner {
static String ReportName= Common.ReportName();
@BeforeClass
public static void setup() {
ExtentProperties extentProperties = ExtentProperties.INSTANCE;
extentProperties.setReportPath("target/cucumber-reports/"+ReportName+".html");
PropertyConfigurator.configure(".//src//log4j.properties");
}
@AfterClass
public static void writeExtentReport() {
Reporter.loadXMLConfig(new File(FileReader.getInstance().getConfigReader().getReportConfigPath()));
Reporter.setSystemInfo("User Name", System.getProperty("user.name"));
Reporter.setSystemInfo("Time Zone", System.getProperty("user.timezone"));
Reporter.setSystemInfo("Environment", FileReader.getInstance().getConfigReader().getEnvironment());
}
}
Not sure why it's always running error codes.feature but never enums.feature.
Feature: Enums Codes
@API
Scenario: xxx Enums Codes
Given Run get method "xxxxxxxxxxx" api to get fetch all type of xxx xxx
Then response should be 200
And xxxxxxxxxxxxxx
Feature: Error Codes
@API
Scenario: xxError Codes
Given Run "xxxx" api to get response
Then response should be 200
And Verify xx Error Codes xx error response
"features" is looking for a filesystem path:
features = ".//src//test//java//FeatureList"
Try one of these instead:
features = "src/test/java/FeatureList"
features = "FeatureList"
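For reference, the corrected runner might look like this (a sketch; the glue, plugin and tag values are kept from the question):
@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/test/java/FeatureList", // plain relative path with forward slashes
        glue = "stepDefinations",
        plugin = { "com.cucumber.listener.ExtentCucumberFormatter:",
                   "junit:target/cucumber-results.xml" },
        tags = { "@API" },
        monochrome = true
)
public class TestRunner {
    // setup() and writeExtentReport() unchanged from the question
}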

Use Actors to send data to Akka websockets

I am using Akka websockets to push data to some client.
This is what I have done so far:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.TimeUnit;
import akka.NotUsed;
import akka.actor.ActorSystem;
import akka.http.javadsl.ConnectHttp;
import akka.http.javadsl.Http;
import akka.http.javadsl.ServerBinding;
import akka.http.javadsl.model.HttpRequest;
import akka.http.javadsl.model.HttpResponse;
import akka.http.javadsl.model.ws.Message;
import akka.http.javadsl.model.ws.WebSocket;
import akka.japi.Function;
import akka.stream.ActorMaterializer;
import akka.stream.Materializer;
import akka.stream.javadsl.Flow;
import akka.stream.javadsl.Sink;
import akka.stream.javadsl.Source;
public class Server {
public static HttpResponse handleRequest(HttpRequest request) {
System.out.println("Handling request to " + request.getUri());
if (request.getUri().path().equals("/greeter")) {
final Flow<Message, Message, NotUsed> greeterFlow = greeterHello();
return WebSocket.handleWebSocketRequestWith(request, greeterFlow);
} else {
return HttpResponse.create().withStatus(404);
}
}
public static void main(String[] args) throws Exception {
ActorSystem system = ActorSystem.create();
try {
final Materializer materializer = ActorMaterializer.create(system);
final Function<HttpRequest, HttpResponse> handler = request -> handleRequest(request);
CompletionStage<ServerBinding> serverBindingFuture = Http.get(system).bindAndHandleSync(handler,
ConnectHttp.toHost("localhost", 8080), materializer);
// will throw if binding fails
serverBindingFuture.toCompletableFuture().get(1, TimeUnit.SECONDS);
System.out.println("Press ENTER to stop.");
new BufferedReader(new InputStreamReader(System.in)).readLine();
} finally {
system.terminate();
}
}
public static Flow<Message, Message, NotUsed> greeterHello() {
return Flow.fromSinkAndSource(Sink.ignore(),
Source.single(new akka.http.scaladsl.model.ws.TextMessage.Strict("Hello!")));
}
}
At the client side, I am successfully receiving a 'Hello!' message.
However, now I want to send data dynamically (preferably from an Actor), something like this:
import akka.actor.ActorRef;
import akka.actor.UntypedActor;
public class PushActor extends UntypedActor {
@Override
public void onReceive(Object message) {
if (message instanceof String) {
String statusChangeMessage = (String) message;
// How to push this message to a socket ??
} else {
System.out.println(String.format("'%s':\nReceived unknown message '%s'!", getSelf().path(), message));
}
}
}
I am unable to find any example regarding this online.
The following is the software stack being used:
Java 1.8
akka-http 10.0.10
One - not necessarily very elegant - way of doing this is to use Source.actorRef and send the materialized actor somewhere (maybe a router actor?) depending on your requirements.
public static Flow<Message, Message, NotUsed> greeterHello() {
return Flow.fromSinkAndSourceMat(Sink.ignore(),
Source.actorRef(100, OverflowStrategy.fail()),
Keep.right()).mapMaterializedValue( /* send your actorRef to a router? */);
}
Whoever receives the actorRefs of the connected clients must be responsible for routing messages to them.
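To make that last step concrete, here is a minimal sketch (not from the original answer; the registry and field names are illustrative) of an actor that keeps the materialized connection ActorRef and pushes text frames to it, which is roughly what PushActor in the question would delegate to:
import akka.NotUsed;
import akka.actor.ActorRef;
import akka.actor.UntypedActor;
import akka.http.javadsl.model.ws.Message;
import akka.http.javadsl.model.ws.TextMessage;
import akka.stream.OverflowStrategy;
import akka.stream.javadsl.Flow;
import akka.stream.javadsl.Keep;
import akka.stream.javadsl.Sink;
import akka.stream.javadsl.Source;
public class PushRegistry extends UntypedActor {
    private ActorRef connection; // the materialized Source.actorRef of one connected client
    // Builds the WebSocket flow and hands the per-connection actor to this registry.
    public static Flow<Message, Message, NotUsed> greeterHello(ActorRef registry) {
        Flow<Message, Message, ActorRef> flow = Flow.fromSinkAndSourceMat(
                Sink.ignore(), Source.actorRef(100, OverflowStrategy.fail()), Keep.right());
        return flow.mapMaterializedValue(connectionRef -> {
            registry.tell(connectionRef, ActorRef.noSender()); // hand the per-connection actor over
            return NotUsed.getInstance();
        });
    }
    @Override
    public void onReceive(Object message) {
        if (message instanceof ActorRef) {
            connection = (ActorRef) message; // a client connected
        } else if (message instanceof String && connection != null) {
            // push dynamic data to the connected WebSocket client
            connection.tell(TextMessage.create((String) message), getSelf());
        } else {
            unhandled(message);
        }
    }
}
The registry actor would be created in main() and its ActorRef passed both to greeterHello(...) and to whatever produces the status messages.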

CXF: how to resend original request from CXF Client?

I'm a user of CXF in an SOA development context.
I'm wondering if my problem has a solution with CXF. Here's my need.
We have developed a webapp serving JAX-WS endpoints. The endpoint implementations consist of analyzing the request via interceptors, storing data from the request in a database from the Java service layer, and resending the original request to another server via a CXF client.
The point is that some of our requests contain a DSIG signature (https://www.w3.org/TR/xmldsig-core/) or a signed SAML Assertion.
Our need is to resend the requests without altering them (like a proxy would) from a CXF client. CXF normally sends the marshalled object to the server, but this way the original stream is not sent.
Is there a way to resend the incoming request from the service layer through a Java CXF client without altering it (the signatures depend on the format of the request: blanks, namespace prefixes, carriage returns…)? We prefer the CXF client because we would like to reuse our homemade CXF interceptor which logs the outgoing request.
We have tested an interceptor intended to replace the output stream with the original request before sending it to the server, following this answer: How To Modify The Raw XML message of an Outbound CXF Request?, but it still fails: CXF still sends the stream made from the marshalled object. See the code below.
Context:
- CXF 2.7.18 (JDK 6), and 3.1.10 (JDK 8)
- Platform: windows 7 64bit/ rhel 7 64bit
- Apache Tomcat 7
- Tcpdump to analyse incoming traffic
Sample of code of our client:
final Client cxfClient = org.apache.cxf.frontend.ClientProxy.getClient( portType );
cxfClient.getInInterceptors().clear();
cxfClient.getOutInterceptors().clear();
cxfClient.getOutFaultInterceptors().clear();
cxfClient.getRequestContext().put(CustomStreamerInterceptor.STREAM_TO_SEND,
PhaseInterceptorChain.getCurrentMessage().getContent( InputStream.class ) );
cxfClient.getOutInterceptors().add( new CustomStreamerInterceptor() );
org.apache.cxf.transport.http.HTTPConduit http = (org.apache.cxf.transport.http.HTTPConduit) cxfClient.getConduit();
...
port.doSomething(someRequest);
CustomStreamerInterceptor:
package test;
import java.io.InputStream;
import java.io.OutputStream;
import org.apache.commons.io.IOUtils;
import org.apache.cxf.binding.soap.interceptor.SoapOutInterceptor.SoapOutEndingInterceptor;
import org.apache.cxf.helpers.LoadingByteArrayOutputStream;
import org.apache.cxf.interceptor.AbstractOutDatabindingInterceptor;
import org.apache.cxf.interceptor.Fault;
import org.apache.cxf.io.CacheAndWriteOutputStream;
import org.apache.cxf.message.Message;
import org.apache.cxf.phase.Phase;
public class CustomStreamerInterceptor extends AbstractOutDatabindingInterceptor {
public static final String STREAM_TO_SEND = "STREAM_TO_SEND";
public CustomStreamerInterceptor () {
super( Phase.WRITE_ENDING );
addAfter( SoapOutEndingInterceptor.class.getName() );
}
@Override
public void handleMessage( Message message ) throws Fault {
try {
InputStream toSend = (InputStream) message.get( STREAM_TO_SEND );
if ( toSend != null ) {
toSend.reset();
LoadingByteArrayOutputStream lBos = new LoadingByteArrayOutputStream();
IOUtils.copy( toSend, lBos );
CacheAndWriteOutputStream cawos = (CacheAndWriteOutputStream) message.getContent( OutputStream.class );
cawos.resetOut( lBos, false );//fail !
}
}
catch ( Exception e ) {
throw new Fault( e );
}
}
}
Thank you for any help; it would be very useful.
I think it's better to create a "classic" HTTP client, because CXF is not designed for that kind of situation; it's more commonly used to marshal objects from Java to XML...
Fortunately for you, I have dealt with this problem with an interceptor. You can write an interceptor that copies the stream into the output stream object that CXF is preparing to send to the server. You need to be careful about the phase and the order of your interceptors, because if you use a logging interceptor you probably still want it to log the outgoing stream. This interceptor may do the job; make sure it runs after any logging interceptor. Code for CXF 2.7.18:
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import org.apache.commons.lang.CharEncoding;
import org.apache.cxf.helpers.IOUtils;
import org.apache.cxf.interceptor.Fault;
import org.apache.cxf.interceptor.StaxOutInterceptor;
import org.apache.cxf.message.Exchange;
import org.apache.cxf.message.Message;
import org.apache.cxf.phase.AbstractPhaseInterceptor;
import org.apache.cxf.phase.Phase;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class PassePlatClientInterceptorOut extends AbstractPhaseInterceptor<Message> {
private static final Logger LOG = LoggerFactory.getLogger( PassePlatClientInterceptorOut.class );
private final Exchange exchangeToReadFrom;
public PassePlatClientInterceptorOut( final Exchange exchange ) {
super( Phase.PRE_STREAM );
addBefore( StaxOutInterceptor.class.getName() );
this.exchangeToReadFrom = exchange;
}
@Override
public void handleMessage( Message message ) {
InputStream is = (InputStream) exchangeToReadFrom.get( PassePlatServerInterceptorIn.PASSE_PLAT_INTERCEPTOR_STREAM_SERVEUR );
if ( is != null ) {
message.put( org.apache.cxf.message.Message.ENCODING, CharEncoding.UTF_8 );
OutputStream os = message.getContent( OutputStream.class );
try {
IOUtils.copy( is, os );
is.close();
}
catch ( IOException e ) {
LOG.error( "Error ...", e );
message.setContent( Exception.class, e );
throw new Fault( new Exception( "Error ...", e ) );
}
boolean everythingOK = message.getInterceptorChain().doInterceptStartingAt( message,
org.apache.cxf.interceptor.MessageSenderInterceptor.MessageSenderEndingInterceptor.class.getName() );
if ( !everythingOK ) {
LOG.error( "Error ?" );
throw new Fault( new Exception( "Error ..." ) );
}
}
}
}
To register the interceptor on the client:
cxfClient.getOutInterceptors().add( new PassePlatClientInterceptorOut( exchange ) );
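The server-side in-interceptor that captures the original stream (referenced above as PassePlatServerInterceptorIn) is not shown in the answer; here is a minimal sketch of what it could look like, assuming the raw request is cached at the RECEIVE phase and stored in the exchange under the same key:
import java.io.InputStream;
import org.apache.cxf.helpers.IOUtils;
import org.apache.cxf.interceptor.Fault;
import org.apache.cxf.io.CachedOutputStream;
import org.apache.cxf.message.Message;
import org.apache.cxf.phase.AbstractPhaseInterceptor;
import org.apache.cxf.phase.Phase;
public class PassePlatServerInterceptorIn extends AbstractPhaseInterceptor<Message> {
    public static final String PASSE_PLAT_INTERCEPTOR_STREAM_SERVEUR = "PASSE_PLAT_INTERCEPTOR_STREAM_SERVEUR";
    public PassePlatServerInterceptorIn() {
        super( Phase.RECEIVE ); // before the StAX/SOAP machinery consumes the stream
    }
    @Override
    public void handleMessage( Message message ) throws Fault {
        InputStream is = message.getContent( InputStream.class );
        if ( is == null ) {
            return;
        }
        try {
            // cache the raw bytes so they can be replayed by the client out interceptor
            CachedOutputStream cache = new CachedOutputStream();
            IOUtils.copy( is, cache );
            cache.flush();
            is.close();
            // keep a copy in the exchange for the forwarding client ...
            message.getExchange().put( PASSE_PLAT_INTERCEPTOR_STREAM_SERVEUR, cache.getInputStream() );
            // ... and give CXF a fresh stream so normal unmarshalling still works
            message.setContent( InputStream.class, cache.getInputStream() );
        }
        catch ( Exception e ) {
            throw new Fault( e );
        }
    }
}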

PoolingHttpClientConnectionManager does not release connections

I am using Spring to achieve the following:
On a server, I receive data via a REST interface in XML format. I want to transform the data into JSON and POST it to another server. My code (I removed some sensitive class names/URLs to avoid the wrath of my employer) looks like this:
Main/Configuration class:
package stateservice;
import org.apache.http.HttpHost;
import org.apache.http.client.config.RequestConfig;
import org.apache.http.impl.client.HttpClientBuilder;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.http.client.HttpComponentsClientHttpRequestFactory;
import org.springframework.web.client.RestTemplate;
@SpringBootApplication
public class App {
Logger log = LoggerFactory.getLogger(App.class);
public static void main(String[] args) {
System.out.println("Start!");
SpringApplication.run(App.class, args);
System.out.println("End!");
}
@Bean
public RestTemplate restTemplate() {
log.trace("restTemplate()");
HttpHost proxy = new HttpHost("proxy_url", 8080);
PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
// Increase max total connection to 200
cm.setMaxTotal(200);
cm.setDefaultMaxPerRoute(50);
RequestConfig requestConfig = RequestConfig.custom().setProxy(proxy).build();
HttpClientBuilder httpClientBuilder = HttpClientBuilder.create();
httpClientBuilder.setDefaultRequestConfig(requestConfig);
httpClientBuilder.setConnectionManager(cm);
HttpComponentsClientHttpRequestFactory requestFactory = new HttpComponentsClientHttpRequestFactory(
httpClientBuilder.build());
return new RestTemplate(requestFactory);
}
}
The class representing the RESTful interface:
package stateservice;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;
import foo.bar.XmlData;
@RestController
public class StateController {
private static Logger log = LoggerFactory.getLogger(StateController.class);
@Autowired
ForwarderService forwarder;
@RequestMapping(value = "/data", method = RequestMethod.POST)
public String postState(@RequestBody XmlData data) {
forwarder.forward(data);
return "Done!";
}
}
Finally, the Forwarder:
package stateservice;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;
import foo.bar.Converter;
import foo.bar.XmlData;
@Service
public class ForwarderService {
private static Logger log = LoggerFactory.getLogger(ForwarderService.class);
String uri = "forward_uri";
@Autowired
RestTemplate restTemplate;
@Async
public String forward(XmlData data) {
log.trace("forward(...) - start");
String json = Converter.convert(data);
HttpHeaders headers = new HttpHeaders();
headers.setContentType(MediaType.APPLICATION_JSON);
ResponseEntity<String> response = restTemplate.postForEntity(uri,
new HttpEntity<String>(json, headers), String.class);
// responseEntity.getBody();
// log.trace(responseEntity.toString());
log.trace("forward(...) - end");
return response.getBody();
}
}
However, the connection manager seldom seems to release connections for reuse, and the system gets flooded with connections in the CLOSE_WAIT state (which can be seen using netstat). All connections in the pool get leased but not released, and as soon as the number of connections in the CLOSE_WAIT state reaches the ulimit, I get 'Too many open files' exceptions.
Because of the multithreaded nature of the code, I suspect that sockets cannot be closed/connections cannot be released because some other thread is somehow blocking them.
I would really appreciate any help or any hint you can give me to solve the problem.
There is a trick with Apache HttpEntity: to release a locked connection, the response has to be FULLY consumed and closed. See the EntityUtils and HttpEntity docs for details:
EntityUtils.consume(response.getEntity());
Since version 4.3, Apache HttpClient releases the connection back to the pool when the close() method is called on the CloseableHttpResponse.
However, this feature is supported by Spring Web only since version 4.0.0.RELEASE; see the close() method in HttpComponentsClientHttpResponse.java:
@Override
public void close() {
// Release underlying connection back to the connection manager
try {
try {
// Attempt to keep connection alive by consuming its remaining content
EntityUtils.consume(this.httpResponse.getEntity());
} finally {
// Paranoia
this.httpResponse.close();
}
}
catch (IOException ignore) {
}
}
The key to success is the line marked "// Paranoia": the explicit .close() call. It actually releases the connection back to the pool.
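For completeness, the same rule can be applied by hand when the Apache client is used directly (or when an older spring-web cannot be upgraded); a small sketch of consuming and closing the response so the pooled connection is returned (the URL and default client setup are illustrative):
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;
public class ReleaseConnectionExample {
    public static void main(String[] args) throws Exception {
        CloseableHttpClient client = HttpClients.createDefault();
        CloseableHttpResponse response = client.execute(new HttpGet("http://example.org"));
        try {
            // fully consume the entity so the connection can be reused ...
            EntityUtils.consume(response.getEntity());
        } finally {
            // ... and close the response to hand the connection back to the pool
            response.close();
        }
        client.close();
    }
}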
