If I run the following two tests together, I get the error below.
1st test
@Rule
public GrpcCleanupRule grpcCleanup = new GrpcCleanupRule();
@Test
public void findAll() throws Exception {
// Generate a unique in-process server name.
String serverName = InProcessServerBuilder.generateName();
// Create a server, add service, start, and register for automatic graceful shutdown.
grpcCleanup.register(InProcessServerBuilder
.forName(serverName)
.directExecutor()
.addService(new Data(mockMongoDatabase))
.build()
.start());
// Create a client channel and register for automatic graceful shutdown.
RoleServiceGrpc.RoleServiceBlockingStub stub = RoleServiceGrpc.newBlockingStub(
grpcCleanup.register(InProcessChannelBuilder
.forName(serverName)
.directExecutor()
.build()));
RoleOuter.Response response = stub.findAll(Empty.getDefaultInstance());
assertNotNull(response);
}
2nd test
@Test
public void testFindAll() {
ManagedChannel channel = ManagedChannelBuilder.forAddress("localhost", 8081)
.usePlaintext()
.build();
RoleServiceGrpc.RoleServiceBlockingStub stub = RoleServiceGrpc.newBlockingStub(channel);
RoleOuter.Response response = stub.findAll(Empty.newBuilder().build());
assertNotNull(response);
}
io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference cleanQueue
SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=1, target=localhost:8081} was not shutdown properly!!! ~*~*~*
Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
java.lang.RuntimeException: ManagedChannel allocation site
at io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.&lt;init&gt;(ManagedChannelOrphanWrapper.java:94)
If I comment out either one of them, there are no errors and the unit tests pass, but the exception is thrown when both are run together.
Edit
Based on the suggestion:
@Test
public void testFindAll() {
ManagedChannel channel = ManagedChannelBuilder.forAddress("localhost", 8081)
.usePlaintext()
.build();
RoleServiceGrpc.RoleServiceBlockingStub stub = RoleServiceGrpc.newBlockingStub(channel);
RoleOuter.Response response = stub.findAll(Empty.newBuilder().build());
assertNotNull(response);
channel.shutdown();
}
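The warning asks not only for shutdown() but also for waiting until awaitTermination() returns true. A more complete teardown might look like this (a sketch; the 5-second timeout is an arbitrary choice, and the test method must declare or handle InterruptedException):

```java
import java.util.concurrent.TimeUnit;

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

@Test
public void testFindAll() throws InterruptedException {
    ManagedChannel channel = ManagedChannelBuilder.forAddress("localhost", 8081)
            .usePlaintext()
            .build();
    try {
        RoleServiceGrpc.RoleServiceBlockingStub stub = RoleServiceGrpc.newBlockingStub(channel);
        RoleOuter.Response response = stub.findAll(Empty.newBuilder().build());
        assertNotNull(response);
    } finally {
        // Initiate an orderly shutdown, then wait for it to finish,
        // exactly as the warning message suggests.
        channel.shutdown();
        if (!channel.awaitTermination(5, TimeUnit.SECONDS)) {
            channel.shutdownNow(); // fall back to a forced shutdown
        }
    }
}
```

Putting the shutdown in a finally block also guarantees the channel is released even when an assertion fails.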
I just faced a similar issue using the Dialogflow V2 Java SDK, where I received the error:
Oct 19, 2019 4:12:23 PM io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference cleanQueue
SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=41, target=dialogflow.googleapis.com:443} was not shutdown properly!!! ~*~*~*
Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
Also, since we have a huge customer base, we started running into "java.lang.OutOfMemoryError: unable to create new native thread" errors.
After a lot of debugging and thread monitoring with VisualVM, I finally figured out that the problem was the SessionsClient not being closed. So I used the attached code block to solve that issue. After testing that block, I was finally able to free up all the used threads, and the error mentioned earlier was also resolved.
SessionsClient sessionsClient = null;
QueryResult queryResult = null;
try {
SessionsSettings.Builder settingsBuilder = SessionsSettings.newBuilder();
SessionsSettings sessionsSettings = settingsBuilder
.setCredentialsProvider(FixedCredentialsProvider.create(credentials)).build();
sessionsClient = SessionsClient.create(sessionsSettings);
SessionName session = SessionName.of(projectId, senderId);
com.google.cloud.dialogflow.v2.TextInput.Builder textInput = TextInput.newBuilder().setText(message)
.setLanguageCode(languageCode);
QueryInput queryInput = QueryInput.newBuilder().setText(textInput).build();
DetectIntentResponse response = sessionsClient.detectIntent(session, queryInput);
queryResult = response.getQueryResult();
} catch (Exception e) {
e.printStackTrace();
}
finally {
    if (sessionsClient != null) { // create() may have failed, leaving this null
        sessionsClient.close();
    }
}
The shorter values on the graph highlight the effect of sessionsClient.close(). Without it, the threads were stuck in a parked state.
I had a similar issue recently using Google cloud task API.
Channel ManagedChannelImpl{logId=5, target=cloudtasks.googleapis.com:443} was
not shutdown properly!!! ~*~*~* (ManagedChannelOrphanWrapper.java:159)
Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
In this case, the CloudTasksClient object implements AutoCloseable, so we should call its .close() method after we are done with it.
We can use a try-with-resources block like this, which auto-closes the client when done:
try (CloudTasksClient client = CloudTasksClient.create()) {
CloudTaskQueue taskQueue = new CloudTaskQueue(client);
}
or add a try/finally:
CloudTasksClient client = null;
try {
    client = CloudTasksClient.create();
    CloudTaskQueue taskQueue = new CloudTaskQueue(client);
} catch (IOException e) {
    e.printStackTrace();
} finally {
    if (client != null) { // create() may have thrown, leaving this null
        client.close();
    }
}
In my case, I just shut down the channel in a try/finally block:
ManagedChannel channel = ManagedChannelBuilder.forAddress...
try{
...
}finally {
channel.shutdown();
}
Related
I'm working on a POC using LaunchDarkly's Java + Redis SDK and one of my requirements is initializing a 2nd LaunchDarkly client in "offline" mode. Due to my existing architecture one application will connect to LaunchDarkly and hydrate a Redis instance. The 2nd application will connect to the same data store, but the client will initialize as "offline" -- is there currently a way for me to read stored events from the offline client and flush them to the LaunchDarkly servers?
In the code snippet below I am initializing the first client + redis store, then initializing a 2nd client in a background thread that connects to the same local redis instance. I can confirm that when I run this snippet I do not see events populate in the LaunchDarkly UI.
NOTE: this is a POC to determine whether LaunchDarkly will work for my use case. It is not a Production-grade implementation.
public static void main(String[] args) throws IOException {
LDConfig config = new LDConfig.Builder().dataStore(Components
.persistentDataStore(
Redis.dataStore().uri(URI.create("redis://127.0.0.1:6379")).prefix("my-key-prefix"))
.cacheSeconds(30)).build();
LDClient ldClient = new LDClient("SDK-KEY", config);
Runnable r = new Runnable() {
@Override
public void run() {
LDConfig offlineConfig = new LDConfig.Builder().dataStore(Components
.persistentDataStore(
Redis.dataStore().uri(URI.create("redis://127.0.0.1:6379")).prefix("my-key-prefix"))
.cacheSeconds(30)).offline(true).build();
LDClient offlineClient = new LDClient("SDK-KEY", offlineConfig);
String uniqueId = "abcde";
LDUser user = new LDUser.Builder(uniqueId).custom("customField", "customValue").build();
boolean showFeature = offlineClient.boolVariation("test-feature-flag", user, false);
if (showFeature) {
System.out.println("Showing your feature");
} else {
System.out.println("Not showing your feature");
}
try {
offlineClient.close();
} catch (IOException e) {
e.printStackTrace();
}
}
};
ExecutorService executor = Executors.newCachedThreadPool();
executor.submit(r);
executor.shutdown();
ldClient.close();
}
Hyperledger Sawtooth supports subscribing to events from the Transaction Processor. However, is there a way to create application-specific events in the Transaction Processor, something like the Python example here: https://www.jacklllll.xyz/blog/2019/04/08/sawtooth/
ctx.addEvent(
'agreement/create',
[['name', 'agreement'],
['address', address],
['buyer name', agreement.BuyerName],
['seller name', agreement.SellerName],
['house id', agreement.HouseID],
['creator', signer]],
null)
In the current Sawtooth Java SDK (v0.1.2), the only method to override is
apply(TpProcessRequest, State)
without the Context. However, the documentation here lists: https://github.com/hyperledger/sawtooth-sdk-java/blob/master/sawtooth-sdk-transaction-processor/src/main/java/sawtooth/sdk/processor/TransactionHandler.java
addEvent(TpProcessRequest, Context)
So far I have managed to listen to sawtooth/state-delta events; however, this gives me all state changes for that transaction family.
import sawtooth.sdk.protobuf.EventSubscription;
import sawtooth.sdk.protobuf.EventFilter;
import sawtooth.sdk.protobuf.ClientEventsSubscribeRequest;
import sawtooth.sdk.protobuf.ClientEventsSubscribeResponse;
import sawtooth.sdk.protobuf.ClientEventsUnsubscribeRequest;
import sawtooth.sdk.protobuf.Message;
EventFilter filter = EventFilter.newBuilder()
.setKey("address")
.setMatchString(nameSpace.concat(".*"))
.setFilterType(EventFilter.FilterType.REGEX_ANY)
.build();
EventSubscription subscription = EventSubscription.newBuilder()
.setEventType("sawtooth/state-delta")
.addFilters(filter)
.build();
context = new ZContext();
socket = context.createSocket(ZMQ.DEALER);
socket.connect("tcp://sawtooth-rest:4004");
ClientEventsSubscribeRequest request = ClientEventsSubscribeRequest.newBuilder()
.addSubscriptions(subscription)
.build();
message = Message.newBuilder()
.setCorrelationId("123")
.setMessageType(Message.MessageType.CLIENT_EVENTS_SUBSCRIBE_REQUEST)
.setContent(request.toByteString())
.build();
socket.send(message.toByteArray());
Once the Message.MessageType.CLIENT_EVENTS_SUBSCRIBE_REQUEST is registered I get messages in a thread loop.
I was hoping that in the TransactionHandler I should be able to addEvent() or create some type of event(s) that can then be subscribed using the Java SDK.
Has anyone else tried creating custom events in Java on Sawtooth?
Here's an example of an event being added in Python. Java would be similar.
You add your custom-named event in your Transaction Processor:
context.add_event(event_type="cookiejar/bake", attributes=[("cookies-baked", amount)])
See https://github.com/danintel/sawtooth-cookiejar/blob/master/pyprocessor/cookiejar_tp.py#L138
Here are examples of event handlers written in Python and Go:
https://github.com/danintel/sawtooth-cookiejar/tree/master/events
Java would also be similar. Basically the logic in the event handler is:
Subscribe to the events you want to listen to
Send the request to the Validator
Read and parse the subscription response
In a loop, listen for subscribed events
After exiting the loop (if ever), unsubscribe from events
For those who are trying to use the Java SDK for event publishing/subscribing: there is no direct API available. At least I couldn't find one, and I am using the 1.0 Docker images.
So to publish your events, you need to publish directly to the Sawtooth rest-api server. You need to take care of the following:
You need a context id, which is valid only per request. You get this from the request in your apply() method (code below). So make sure you publish the event during transaction processing, i.e., inside the implementation of the apply() method.
The event structure will be as described in the docs here
If the transaction is successful and the block is committed, you get the event in the event subscriber; otherwise it doesn't show up.
While creating a subscriber, you need to subscribe to the sawtooth/block-commit event and add an additional subscription for your own event type, e.g. "myNS/my-event".
Sample Event Publishing code:
public void apply(TpProcessRequest request, State state) throws InvalidTransactionException, InternalError {
// process your transaction first
sawtooth.sdk.messaging.Stream eventStream = new Stream("tcp://localhost:4004"); // make this in the constructor of class NOT here
List<Attribute> attrList = new ArrayList<>();
Attribute attrs = Attribute.newBuilder().setKey("someKey").setValue("someValue").build();
attrList.add(attrs);
Event appEvent = Event.newBuilder().setEventType("myNS/my-event-type")
.setData( <some ByteString here>).addAllAttributes(attrList).build();
TpEventAddRequest addEventRequest = TpEventAddRequest.newBuilder()
.setContextId(request.getContextId()).setEvent(appEvent).build();
Future sawtoothSubsFuture = eventStream.send(MessageType.TP_EVENT_ADD_REQUEST, addEventRequest.toByteString());
try {
System.out.println(sawtoothSubsFuture.getResult());
} catch (Exception e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
then you subscribe to events as such (inspired from the marketplace samples):
try {
EventFilter eventFilter = EventFilter.newBuilder().setKey("address")
.setMatchString(String.format("^%s.*", "myNamespace"))
.setFilterType(FilterType.REGEX_ANY).build();
//subscribe to sawtooth/block-commit
EventSubscription deltaSubscription = EventSubscription.newBuilder().setEventType("sawtooth/block-commit")
.addFilters(eventFilter)
.build();
EventSubscription mySubscription = EventSubscription.newBuilder().setEventType("myNS/my-event-type")
.build(); //no filters added for my events.
ClientEventsSubscribeRequest subsReq = ClientEventsSubscribeRequest.newBuilder()
.addLastKnownBlockIds("0000000000000000").addSubscriptions(deltaSubscription).addSubscriptions(mySubscription)
.build();
Future sawtoothSubsFuture = eventStream.send(MessageType.CLIENT_EVENTS_SUBSCRIBE_REQUEST,
subsReq.toByteString());
ClientEventsSubscribeResponse eventSubsResp = ClientEventsSubscribeResponse
.parseFrom(sawtoothSubsFuture.getResult());
System.out.println("eventSubsResp.getStatus() :: " + eventSubsResp.getStatus());
if (eventSubsResp.getStatus().equals(ClientEventsSubscribeResponse.Status.UNKNOWN_BLOCK)) {
System.out.println("Unknown block ");
// retry connection if this happens by calling this same method
}
if(!eventSubsResp.getStatus().equals(ClientEventsSubscribeResponse.Status.OK)) {
System.out.println("Subscription failed with status " + eventSubsResp.getStatus());
throw new RuntimeException("cannot connect ");
} else {
isActive = true;
System.out.println("Making active ");
}
while(isActive) {
Message eventMsg = eventStream.receive();
EventList eventList = EventList.parseFrom(eventMsg.getContent());
for (Event event : eventList.getEventsList()) {
System.out.println("An event ::::");
System.out.println(event);
}
}
} catch (Exception e) {
e.printStackTrace();
}
We connect to a data provider and keep reading XML data in an endless loop using HttpClient. We already set the connection timeout and socket timeout in the Java code. However, when the connection is closed on the HTTP server side, our application does not throw any exception and just hangs there doing nothing. Because of this behavior, when the server comes back up, the Java code does not reconnect. Is there any way to tell whether the socket connection has already been closed by the server side?
In addition, https://hc.apache.org/httpcomponents-client-ga/tutorial/html/connmgmt.html mentions how to close stale connections, but it still does not solve our problem.
Thank you very much in advance!
Code snippet:
private static final HttpClientConnectionManager CONN_MGR = new BasicHttpClientConnectionManager();
public boolean connectToServer() {
disconnect();
CredentialsProvider provider = new BasicCredentialsProvider();
UsernamePasswordCredentials credentials = new UsernamePasswordCredentials(username, password);
provider.setCredentials(AuthScope.ANY, credentials);
RequestConfig requestConfig = RequestConfig.custom().setConnectTimeout(15 * 1000)
.setSocketTimeout(15 * 1000).build();
HttpClient client = HttpClientBuilder.create().setDefaultCredentialsProvider(provider)
.setSSLSocketFactory(sslsf)
.setConnectionManager(CONN_MGR)
.setDefaultRequestConfig(requestConfig).build();
HttpGet request = new HttpGet(url);
try {
HttpResponse response = client.execute(request);
in = response.getEntity().getContent();
reader = new BufferedReader(new InputStreamReader(in));
connected = true;
return true;
} catch (Exception re) {
LOGGER.error("error", re);
}
return false;
}
public void process() {
String xml = null;
while (!shuttingDown) {
try {
/* if we know the server side closed the connection already here
we can simply return and scheduler will take care of anything else. */
xml = reader.readLine();
lastSeen = System.currentTimeMillis();
if (StringUtils.isBlank(xml)) {
continue;
}
xml = xml.trim();
// processing XML here
} catch (IOException | NullPointerException ie) {
/* We see SocketTimeoutException relatively often and
* sometimes NullPointerException in reader.readLine(). */
if (!shuttingDown) {
if(!connectToServer()) {
break;
}
}
} catch (RuntimeException re) {
LOGGER.error("other RuntimeException", re);
break;
}
}
// the scheduler will start another processing.
disconnect();
}
// called by the scheduler periodically
public boolean isConnected() {
CONN_MGR.closeExpiredConnections();
CONN_MGR.closeIdleConnections(15, TimeUnit.SECONDS);
long now = System.currentTimeMillis();
if (now - lastSeen > 15000L) {
LOGGER.info("call disconnect() from isConnected().");
disconnect();
}
return connected;
}
I think with the introduction of automatic resource management (try-with-resources) in Java 7
https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html
cleaning up of stale connections has largely been taken care of, and in the worst case it is handled by the implementation providers, which carefully monitor the connections.
Apache HttpClient uses HttpClientConnectionManager and other utility classes that do the job of cleaning up stale connections.
After the last catch block add a finally block for your cleanup code.
finally {
CONN_MGR.shutdown();
}
As per the documentation, shutdown() shuts down the connection manager and releases its allocated resources.
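As a minimal, self-contained illustration of the try-with-resources behavior mentioned above (the Resource class here is invented purely for this demo): resources are closed automatically, in reverse order of creation, even if the body throws.

```java
// Minimal demonstration of try-with-resources: close() runs automatically,
// in reverse declaration order, when the block exits.
class Resource implements AutoCloseable {
    static final StringBuilder LOG = new StringBuilder();
    private final String name;

    Resource(String name) {
        this.name = name;
        LOG.append("open:").append(name).append(' ');
    }

    @Override
    public void close() {
        LOG.append("close:").append(name).append(' ');
    }
}

public class ArmDemo {
    public static void main(String[] args) {
        try (Resource a = new Resource("a"); Resource b = new Resource("b")) {
            Resource.LOG.append("body ");
        }
        System.out.println(Resource.LOG.toString().trim());
        // prints: open:a open:b body close:b close:a
    }
}
```

The same mechanism is why a CloseableHttpClient declared in a try-with-resources header is released without an explicit finally block.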
We avoided the problem by closing the connection from another thread when no data has come in for more than 15 seconds. More specifically, the isConnected() method is now as follows:
// called by the scheduler periodically
public boolean isConnected() {
long now = System.currentTimeMillis();
if (now - lastSeen > 15000L) {
disconnect();
}
return connected;
}
Of course, the reading part will get a java.net.SocketException: Socket closed or java.lang.IllegalStateException: Connection is still allocated when this happens.
After this change everything works fine.
I am using Spring RestTemplate to make HTTP calls to my REST service, with Spring Framework version 3.2.8. I cannot upgrade, since our company has a parent POM pinned to Spring Framework 3.2.8, so I need to stick to that.
Let's say I have two machines:
machineA: This machine runs my code, which uses RestTemplate as the HTTP client to make calls to my REST service running on a different machine (machineB). I have wrapped the code below in a multithreaded application so that I can do load and performance testing on my client code.
machineB: On this machine, I am running my RestService.
Now the problem I am seeing: whenever I run load and performance tests on machineA, my client code makes lots of HTTP calls to the REST service running on machineB very fast, since the client code is called from multiple threads.
I always see a lot of connections in the TIME_WAIT state on machineA, as shown below:
298 ESTABLISHED
14 LISTEN
2 SYN_SENT
10230 TIME_WAIT
291 ESTABLISHED
14 LISTEN
1 SYN_SENT
17767 TIME_WAIT
285 ESTABLISHED
14 LISTEN
1 SYN_SENT
24055 TIME_WAIT
I don't think it's a good sign that we have a lot of TIME_WAIT connections here.
Problem Statement:-
What does this high number of TIME_WAIT connections on machineA mean, in simple language?
Is there any reason why this is happening with RestTemplate, or is it just the way I am using RestTemplate? If I am doing something wrong in the way I am using it, what's the right way?
Do I need to set a keep-alive or Connection: close header while using RestTemplate? Any inputs/suggestions are greatly appreciated, as I am confused about what's going on here.
Below is a simplified version of how I am using RestTemplate in my code base (just to illustrate the overall idea):
public class DataClient implements Client {
private final RestTemplate restTemplate = new RestTemplate();
private ExecutorService executor = Executors.newFixedThreadPool(10);
// for synchronous call
@Override
public String getSyncData(DataKey key) {
String response = null;
Future<String> handler = null;
try {
handler = getAsyncData(key);
response = handler.get(100, TimeUnit.MILLISECONDS); // we have a 100 milliseconds timeout value set
} catch (TimeoutException ex) {
// log an exception
handler.cancel(true);
} catch (Exception ex) {
// log an exception
}
return response;
}
// for asynchronous call
@Override
public Future<String> getAsyncData(DataKey key) {
Future<String> future = null;
try {
Task task = new Task(key, restTemplate);
future = executor.submit(task);
} catch (Exception ex) {
// log an exception
}
return future;
}
}
And below is my simple Task class
class Task implements Callable<String> {
private final RestTemplate restTemplate;
private final DataKey key;
public Task(DataKey key, RestTemplate restTemplate) {
this.key = key;
this.restTemplate = restTemplate;
}
public String call() throws Exception {
ResponseEntity<String> response = null;
String url = "some_url_created_by_using_key";
// handling all try catch here
response = restTemplate.exchange(url, HttpMethod.GET, null, String.class);
return response.getBody();
}
}
TIME_WAIT is the state that a TCP connection maintains for a configurable amount of time after it has been closed (FIN sent and FIN received). This way, a possible delayed packet from one connection cannot be mixed into a later connection that reuses the same port.
In a high-traffic test, it is normal to have a lot of them, but they should disappear a few minutes after the test has finished.
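One common mitigation (a sketch, not part of the original answer) is to reuse pooled keep-alive connections instead of opening a fresh socket per request, so far fewer connections are torn down into TIME_WAIT. With Spring 3.x this can be done by backing the RestTemplate with Apache HttpClient's pooling connection manager; the pool sizes below are placeholder values you would tune for your load:

```java
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.springframework.http.client.HttpComponentsClientHttpRequestFactory;
import org.springframework.web.client.RestTemplate;

public class PooledRestTemplateFactory {
    public static RestTemplate create() {
        // Pooled connection manager: sockets are kept alive and reused
        // across requests, so far fewer of them end up in TIME_WAIT.
        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
        cm.setMaxTotal(100);          // placeholder: total pool size
        cm.setDefaultMaxPerRoute(20); // placeholder: per-host limit

        CloseableHttpClient httpClient = HttpClients.custom()
                .setConnectionManager(cm)
                .build();

        return new RestTemplate(new HttpComponentsClientHttpRequestFactory(httpClient));
    }
}
```

Create one such RestTemplate and share it across all threads (RestTemplate is thread-safe once configured) rather than constructing a new one per request.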
I'm using the MockWebServer library in my Android JUnit tests. I'm testing an SDK that makes calls to a server, so I'm using MockWebServer to override these server URLs and capture what the SDK is sending in order to make assertions on it.
The problem I'm running into is that if I call server.takeRequest() and assign the result to a new RecordedRequest variable, the test hangs on the second server.takeRequest() call, and sometimes even on the first one: if I run it on an emulator it hangs on the first server.takeRequest(), but if I run it on my physical Android device, it freezes on the second one.
public void testSomething() {
final MockWebServer server = new MockWebServer();
try {
server.play();
server.enqueue(new MockResponse().setBody("")
.setResponseCode(HttpURLConnection.HTTP_INTERNAL_ERROR));
server.enqueue(new MockResponse().setBody("")
.setResponseCode(HttpURLConnection.HTTP_OK));
server.enqueue(new MockResponse().setBody("")
.setResponseCode(HttpURLConnection.HTTP_OK));
URL url = server.getUrl("/");
// This internal method overrides some of the hardcoded URLs
// within the SDK that I'm testing.
Util.overrideUrls(url.toString());
// Do some server calls via the SDK utilizing the mock server url.
RecordedRequest requestFor500 = server.takeRequest();
// Do some assertions with 'requestFor500'
// Do some more server calls via the SDK utilizing the mock server url.
/*
* This is the part where the JUnit test hangs or seems to go into
* an infinite loop and never recovers
*/
RecordedRequest requestAfter500Before200 = server.takeRequest();
} catch (Exception e) {
...
}
}
Am I doing something wrong or is this some type of bug with MockWebServer?
Add a timeout to MockWebServer's takeRequest() so that it does not hang:
server.takeRequest(1, TimeUnit.SECONDS);
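The timed overload returns null when no request arrives within the timeout, so the test can fail fast instead of hanging. A sketch (package names vary across MockWebServer versions; okhttp3.mockwebserver is assumed here):

```java
import java.util.concurrent.TimeUnit;

import okhttp3.mockwebserver.MockWebServer;
import okhttp3.mockwebserver.RecordedRequest;

class RequestAssertions {
    // Waits at most one second for the SDK's request instead of blocking forever.
    static RecordedRequest awaitRequest(MockWebServer server) throws InterruptedException {
        RecordedRequest request = server.takeRequest(1, TimeUnit.SECONDS);
        if (request == null) {
            throw new AssertionError("SDK never sent the expected request");
        }
        return request; // make further assertions on this in the test
    }
}
```

This turns a hung test into an immediate, diagnosable failure.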
There seems to be a problem with MockWebServer's dispatch queue, which freezes for some reason when serving responses that are not 200 or 302. I solved this by providing a custom dispatcher:
MockWebServer server = ...;
final MockResponse response = new MockResponse().setResponseCode(401);
server.setDispatcher(new Dispatcher() {
@Override
public MockResponse dispatch(RecordedRequest request)
throws InterruptedException {
return response; // this could have been more sophisticated
}
});
Tested with MockWebServer 2.0.0