How to read metadata in gRPC using Java on the client side

I am using Java and the protoc 3.0 compiler, and my proto file is shown below. It corresponds to this OpenConfig RPC model:
https://github.com/openconfig/public/blob/master/release/models/rpc/openconfig-rpc-api.yang
syntax = "proto3";
package Telemetry;
// Interface exported by Agent
service OpenConfigTelemetry {
// Request an inline subscription for data at the specified path.
// The device should send telemetry data back on the same
// connection as the subscription request.
rpc telemetrySubscribe(SubscriptionRequest) returns (stream OpenConfigData) {}
// Terminates and removes an existing telemetry subscription
rpc cancelTelemetrySubscription(CancelSubscriptionRequest) returns (CancelSubscriptionReply) {}
// Get the list of current telemetry subscriptions from the
// target. This command returns a list of existing subscriptions
// not including those that are established via configuration.
rpc getTelemetrySubscriptions(GetSubscriptionsRequest) returns (GetSubscriptionsReply) {}
// Get Telemetry Agent Operational States
rpc getTelemetryOperationalState(GetOperationalStateRequest) returns (GetOperationalStateReply) {}
// Return the set of data encodings supported by the device for
// telemetry data
rpc getDataEncodings(DataEncodingRequest) returns (DataEncodingReply) {}
}
// Message sent for a telemetry subscription request
message SubscriptionRequest {
// Data associated with a telemetry subscription
SubscriptionInput input = 1;
// List of data models paths and filters
// which are used in a telemetry operation.
repeated Path path_list = 2;
// The below configuration is not defined in Openconfig RPC.
// It is a proposed extension to configure additional
// subscription request features.
SubscriptionAdditionalConfig additional_config = 3;
}
// Data associated with a telemetry subscription
message SubscriptionInput {
// List of optional collector endpoints to send data for
// this subscription.
// If no collector destinations are specified, the collector
// destination is assumed to be the requester on the rpc channel.
repeated Collector collector_list = 1;
}
// Collector endpoints to send data specified as an ip+port combination.
message Collector {
// IP address of collector endpoint
string address = 1;
// Transport protocol port number for the collector destination.
uint32 port = 2;
}
// Data model path
message Path {
// Data model path of interest
// Path specification for elements of OpenConfig data models
string path = 1;
// Regular expression to be used in filtering state leaves
string filter = 2;
// If this is set to true, the target device will only send
// updates to the collector upon a change in data value
bool suppress_unchanged = 3;
// Maximum time in ms the target device may go without sending
// a message to the collector. If this time expires with
// suppress-unchanged set, the target device must send an update
// message regardless if the data values have changed.
uint32 max_silent_interval = 4;
// Time in ms between collection and transmission of the
// specified data to the collector platform. The target device
// will sample the corresponding data (e.g., a counter) and
// immediately send to the collector destination.
//
// If sample-frequency is set to 0, then the network device
// must emit an update upon every datum change.
uint32 sample_frequency = 5;
}
// Configure subscription request additional features.
message SubscriptionAdditionalConfig {
// limit the number of records sent in the stream
int32 limit_records = 1;
// limit the time the stream remains open
int32 limit_time_seconds = 2;
}
// Reply to inline subscription for data at the specified path is done in
// two parts.
// 1. Reply data message sent out using out-of-band channel.
// 2. Telemetry data sent back on the same connection as the
// subscription request.
// 1. Reply data message sent out using out-of-band channel.
message SubscriptionReply {
// Response message to a telemetry subscription creation or
// get request.
SubscriptionResponse response = 1;
// List of data models paths and filters
// which are used in a telemetry operation.
repeated Path path_list = 2;
}
// Response message to a telemetry subscription creation or get request.
message SubscriptionResponse {
// Unique id for the subscription on the device. This is
// generated by the device and returned in a subscription
// request or when listing existing subscriptions
uint32 subscription_id = 1;
}
// 2. Telemetry data sent back on the same connection as the
// subscription request.
message OpenConfigData {
// router name:export IP address
string system_id = 1;
// line card / RE (slot number)
uint32 component_id = 2;
// PFE (if applicable)
uint32 sub_component_id = 3;
// Path specification for elements of OpenConfig data models
string path = 4;
// Sequence number, monotonically increasing for each
// system_id, component_id, sub_component_id + path.
uint64 sequence_number = 5;
// timestamp (milliseconds since epoch)
uint64 timestamp = 6;
// List of key-value pairs
repeated KeyValue kv = 7;
}
// Simple Key-value, where value could be one of scalar types
message KeyValue {
// Key
string key = 1;
// One of possible values
oneof value {
double double_value = 5;
int64 int_value = 6;
uint64 uint_value = 7;
sint64 sint_value = 8;
bool bool_value = 9;
string str_value = 10;
bytes bytes_value = 11;
}
}
// Message sent for a telemetry subscription cancellation request
message CancelSubscriptionRequest {
// Subscription identifier as returned by the device when
// subscription was requested
uint32 subscription_id = 1;
}
// Reply to telemetry subscription cancellation request
message CancelSubscriptionReply {
// Return code
ReturnCode code = 1;
// Return code string
string code_str = 2;
}
// Result of the operation
enum ReturnCode {
SUCCESS = 0;
NO_SUBSCRIPTION_ENTRY = 1;
UNKNOWN_ERROR = 2;
}
// Message sent for a telemetry get request
message GetSubscriptionsRequest {
// Subscription identifier as returned by the device when
// subscription was requested
// --- or ---
// 0xFFFFFFFF for all subscription identifiers
uint32 subscription_id = 1;
}
// Reply to telemetry subscription get request
message GetSubscriptionsReply {
// List of current telemetry subscriptions
repeated SubscriptionReply subscription_list = 1;
}
// Message sent for telemetry agent operational states request
message GetOperationalStateRequest {
// Per-subscription_id level operational state can be requested.
//
// Subscription identifier as returned by the device when
// subscription was requested
// --- or ---
// 0xFFFFFFFF for all subscription identifiers including agent-level
// operational stats
// --- or ---
// If subscription_id is not present then sent only agent-level
// operational stats
uint32 subscription_id = 1;
// Control verbosity of the output
VerbosityLevel verbosity = 2;
}
// Verbosity Level
enum VerbosityLevel {
DETAIL = 0;
TERSE = 1;
BRIEF = 2;
}
// Reply to telemetry agent operational states request
message GetOperationalStateReply {
// List of key-value pairs where
// key = operational state definition
// value = operational state value
repeated KeyValue kv = 1;
}
// Message sent for a data encoding request
message DataEncodingRequest {
}
// Reply to data encodings supported request
message DataEncodingReply {
repeated EncodingType encoding_list = 1;
}
// Encoding Type Supported
enum EncodingType {
UNDEFINED = 0;
XML = 1;
JSON_IETF = 2;
PROTO3 = 3;
}
To make the service call (rpc telemetrySubscribe), I first need to read the header that carries the subscription id and then start reading messages. Using Java I am able to connect to the service, and I introduced an interceptor, but when I print/retrieve the header it is null. My code that installs the interceptor is below:
ClientInterceptor interceptor = new HeaderClientInterceptor();
originChannel = OkHttpChannelBuilder.forAddress(host, port)
.usePlaintext(true)
.build();
Channel channel = ClientInterceptors.intercept(originChannel, interceptor);
telemetryStub = OpenConfigTelemetryGrpc.newStub(channel);
This is the interceptor code to read the metadata:
@Override
public <ReqT, RespT> ClientCall<ReqT, RespT> interceptCall(MethodDescriptor<ReqT, RespT> method,
        CallOptions callOptions, Channel next) {
    return new SimpleForwardingClientCall<ReqT, RespT>(next.newCall(method, callOptions)) {
        @Override
        public void start(Listener<RespT> responseListener, Metadata headers) {
            super.start(new SimpleForwardingClientCallListener<RespT>(responseListener) {
                @Override
                public void onHeaders(Metadata headers) {
                    Key<String> CUSTOM_HEADER_KEY = Metadata.Key.of("responseKEY", Metadata.ASCII_STRING_MARSHALLER);
                    System.out.println("Contains Key?? " + headers.containsKey(CUSTOM_HEADER_KEY));
                    super.onHeaders(headers);
                }
            }, headers);
        }
    };
}
Is there any other way to read the metadata, or the first message, which has the subscription id in it? All I need is to read the first message containing the subscription id and return the same subscription id to the server so that streaming can start. I have equivalent Python code using the same proto file that communicates with the server; it is shown below for reference only:
sub_req = SubscribeRequestMsg("host",port)
data_itr = stub.telemetrySubscribe(sub_req, _TIMEOUT_SECONDS)
metadata = data_itr.initial_metadata()
if metadata[0][0] == "responseKey":
metainfo = metadata[0][1]
print metainfo
subreply = agent_pb2.SubscriptionReply()
subreply.SetInParent()
google.protobuf.text_format.Merge(metainfo, subreply)
if subreply.response.subscription_id:
SUB_ID = subreply.response.subscription_id
From the Python code above I can easily retrieve the metadata object; I am not sure how to retrieve the same using Java.
After reading the metadata, all I am getting is: Metadata({content-type=[application/grpc], grpc-encoding=[identity], grpc-accept-encoding=[identity,deflate,gzip]})
But I know there is one more entry in the metadata, which is
response {
subscription_id: 2
}
How can I extract that last response, which has the subscription id in it, from the header? I have tried many options and I am lost here.

The method you used is for request metadata, not response metadata:
public void start(Listener<RespT> responseListener, Metadata headers) {
For response metadata, you will need a ClientCall.Listener and wait for the onHeaders callback:
public void onHeaders(Metadata headers)
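Putting those together, a minimal sketch of an interceptor that captures the header and hands it back through a future might look like this (assuming, as in your Python snippet, that the server sends the id in a header named responseKEY):
import java.util.concurrent.CompletableFuture;

import io.grpc.CallOptions;
import io.grpc.Channel;
import io.grpc.ClientCall;
import io.grpc.ClientInterceptor;
import io.grpc.ForwardingClientCall.SimpleForwardingClientCall;
import io.grpc.ForwardingClientCallListener.SimpleForwardingClientCallListener;
import io.grpc.Metadata;
import io.grpc.MethodDescriptor;

public class ResponseHeaderInterceptor implements ClientInterceptor {

    private static final Metadata.Key<String> RESPONSE_KEY =
            Metadata.Key.of("responseKEY", Metadata.ASCII_STRING_MARSHALLER);

    // Completed as soon as the server's initial metadata arrives.
    private final CompletableFuture<String> headerValue = new CompletableFuture<>();

    public CompletableFuture<String> headerValue() {
        return headerValue;
    }

    @Override
    public <ReqT, RespT> ClientCall<ReqT, RespT> interceptCall(
            MethodDescriptor<ReqT, RespT> method, CallOptions callOptions, Channel next) {
        return new SimpleForwardingClientCall<ReqT, RespT>(next.newCall(method, callOptions)) {
            @Override
            public void start(Listener<RespT> responseListener, Metadata headers) {
                super.start(new SimpleForwardingClientCallListener<RespT>(responseListener) {
                    @Override
                    public void onHeaders(Metadata headers) {
                        String value = headers.get(RESPONSE_KEY);
                        if (value != null) {
                            headerValue.complete(value);
                        }
                        super.onHeaders(headers);
                    }
                }, headers);
            }
        };
    }
}
The channel setup from the question stays the same; after the call starts, interceptor.headerValue().get() blocks until the initial metadata arrives, and the text-format payload can then be merged into a SubscriptionReply with com.google.protobuf.TextFormat.merge, mirroring what the Python code does.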
I do feel the usage of metadata you describe is strange, though. Metadata is generally for additional error details or for cross-cutting features that aren't specific to the RPC method (like auth, tracing, etc.).

Oftentimes using the ClientInterceptor is inconvenient, because you need to maintain a reference to it in order to pull the data back out. In your case, the data is actually Metadata. One way to get easier access to the Metadata is to put it inside the Context.
For example, you could create a Context.Key for the subscription id. In your client interceptor, you could extract the Metadata header that you want and put it inside the Context, using Context.current().withValue(key, metadata). Inside your StreamObserver, you can extract it by calling key.get(Context.current()). This assumes you are using the async API rather than the blocking API.
The reason it is more difficult is that metadata is usually information about a call, not directly related to the call itself. It is for things like tracing, encoding, stats, cancellation and the like. If something changes the way you handle the request, it probably needs to go directly into the request itself, rather than riding on the side.

In case this helps anyone else out:
I needed to write a specific response header in my gRPC-Java server based upon the request/response.
What I ended up doing was storing the response header value in a Context using Context::withValue (which doesn't modify the existing context but creates a new Context), and then calling the service handler's StreamObserver::onNext inside the Context::run callback. StreamObserver::onNext calls into the ServerCall::sendHeaders override set up by my ServerInterceptor, and there, in sendHeaders, I can read the value stored in the Context and set the response header value.
I think this is similar to @carl-mastrangelo's approach, just spelled out a little bit more.
public enum MyServerInterceptor implements ServerInterceptor {
INSTANCE;
public static final Metadata.Key<String> METADATA_KEY =
Metadata.Key.of("fish", ASCII_STRING_MARSHALLER);
public static final Context.Key<String> CONTEXT_KEY = Context.key("dog");
@Override
public <ReqT, RespT> ServerCall.Listener<ReqT> interceptCall(ServerCall<ReqT, RespT> call,
Metadata requestHeaders,
ServerCallHandler<ReqT, RespT> next) {
ServerCall<ReqT, RespT> myServerCall = new ForwardingServerCall.SimpleForwardingServerCall<ReqT, RespT>(call) {
@Override
public void sendHeaders(Metadata responseHeaders) {
String value = CONTEXT_KEY.get(Context.current());
responseHeaders.put(METADATA_KEY, value);
super.sendHeaders(responseHeaders);
}
};
return next.startCall(myServerCall, requestHeaders);
}
}
public class MyService extends MyServiceGrpc.MyServiceImplBase {
@Override
public void serviceMethod(MyRequest request, StreamObserver<MyResponse> responseObserver) {
MyResponse response = MyResponse.newBuilder().build();
// Reuse the interceptor's key: Context keys compare by identity, so a
// fresh Context.key("dog") would never match MyServerInterceptor.CONTEXT_KEY.
Context.current().withValue(MyServerInterceptor.CONTEXT_KEY, "cat").run(() -> {
    responseObserver.onNext(response);
});
responseObserver.onCompleted();
}
}
Again, the critical piece here is that responseObserver::onNext calls into ForwardingServerCall.SimpleForwardingServerCall::sendHeaders.
Please let me know if there is a better way. This all seems more complex than I'd like.

Related

Communication issues between two Netty socket applications (partial messages)

I'm experiencing communication issues while testing between two Netty socket applications (the main application and the integration test application), where I've been receiving an unusual number of partial messages.
One pattern I've noticed is that the first message sent from the test application (which sends the message from outside the pipeline, using a sharable handler) tends to always be partial. The same thing happens during periods of latency, which is when the other issue occurs.
The other issue is that when a partial message is received, the decoder sometimes seems to be trapped in a loop, where it keeps trying to read the partial message indefinitely. I have a unit test that simulates the partial message using EmbeddedChannel, but the unit test does not replicate what I am seeing during the integration test.
The main application is using the following pipeline:
ch.pipeline().addLast(<HeaderTrailerFrameDecoder>, <NettyMessageDecoder>, <HeaderTrailerFrameEncoder>, <NettyMessageEncoder>);
ch.pipeline().addLast(<IdleStateHandler>,<EventHandler>, <ChannelHandler>);
where:
HeaderTrailerFrameDecoder - Removes single-byte frame from beginning / end of packet
NettyMessageDecoder - Converts message from ByteBuf to domain object
HeaderTrailerFrameEncoder - Appends frame to message packets
NettyMessageEncoder - Converts message from domain object to ByteBuf
IdleStateHandler - Netty class, detects stale / idle connections
EventHandler - Sharable handler to send messages from outside of the pipeline
ChannelHandler - Main handler for all business logic
I'm thinking the problem could be in the encoders/decoders, perhaps from not releasing my ByteBuf objects? I'm only seeing the issue in the first decoder, the HeaderTrailerFrameDecoder, so I'll provide a snippet below. For every connection, the first message coming through from the series of messages sent by the test application first produces the log msg="could not find trailer".
@Slf4j // Lombok logging
public class HeaderTrailerFrameDecoder extends ByteToMessageDecoder {
private final byte header;
private final byte trailer;
HeaderTrailerFrameDecoder(byte header, byte trailer) {
this.header = header;
this.trailer = trailer;
}
@Override
protected void decode(
final ChannelHandlerContext ctx,
final ByteBuf buf,
final List<Object> out) {
log.trace("msg=\"decoding message with header and trailer\", buf={}", buf);
// Find header
int headerIndex = buf.forEachByte(value -> value != header);
if (headerIndex < 0) {
String payload = debug(buf);
log.error("msg=\"could not find header\", payload=\"{}\"", payload);
buf.skipBytes(buf.readableBytes());
return;
}
int beforeHeaderLen = headerIndex - buf.readerIndex();
buf.skipBytes(beforeHeaderLen + 1);
// Find trailer
int trailerIndex = buf.forEachByte(value -> value != trailer);
if (trailerIndex < 0) {
String payload = debug(buf);
log.error("msg=\"could not find trailer\", payload=\"{}\"", payload);
buf.resetReaderIndex();
return;
}
int insideFrameLen = trailerIndex - buf.readerIndex();
ByteBuf frame = buf.readBytes(insideFrameLen);
buf.skipBytes(1);
// Pass message through
out.add(frame);
}
}
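For reference, the usual way to handle an incomplete frame in a ByteToMessageDecoder is to leave the readerIndex where it was and simply return, so that decode runs again once more bytes have been accumulated. A minimal sketch of decode along those lines (same header/trailer fields as above; a sketch, not a drop-in fix):
@Override
protected void decode(final ChannelHandlerContext ctx, final ByteBuf buf, final List<Object> out) {
    // Scan without consuming anything yet.
    int headerIndex = buf.indexOf(buf.readerIndex(), buf.writerIndex(), header);
    if (headerIndex < 0) {
        return; // no header yet: keep the bytes and wait for the next read
    }
    int trailerIndex = buf.indexOf(headerIndex + 1, buf.writerIndex(), trailer);
    if (trailerIndex < 0) {
        return; // partial frame: readerIndex untouched, decode runs again when more bytes arrive
    }
    // Consume up to and including the header, read the frame, skip the trailer.
    buf.readerIndex(headerIndex + 1);
    ByteBuf frame = buf.readBytes(trailerIndex - headerIndex - 1);
    buf.skipBytes(1);
    out.add(frame);
}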

Importing CSV data from Storage to Cloud SQL not working - status always "pending"

I am new to Java (I have experience with C#, though).
Sadly, I inherited a terrible project (the code is terrible) and what I need to accomplish is to import some CSV files into Cloud SQL.
So there's a web service which runs this task; apparently the dev followed this guide to import data, but it is not working. Here's the code (the essential parts; the actual code is longer and uglier):
InstancesImportRequest requestBody = new InstancesImportRequest();
ImportContext ic = new ImportContext();
ic.setKind("sql#importContext");
ic.setFileType("csv");
ic.setUri(bucketPath);
ic.setDatabase(CLOUD_SQL_DATABASE);
CsvImportOptions csv = new CsvImportOptions();
csv.setTable(tablename);
List<String> list = new ArrayList<String>();
// here there is some code that populates the list with the columns
csv.setColumns(list);
ic.setCsvImportOptions(csv);
requestBody.setImportContext(ic);
SQLAdmin sqlAdminService = createSqlAdminService();
SQLAdmin.Instances.SQLAdminImport request = sqlAdminService.instances().sqladminImport(project, instance, requestBody);
Operation response = request.execute();
System.out.println("Executed : Going to sleep.>"+response.getStatus());
int c = 1;
while(!response.getStatus().equalsIgnoreCase("Done")){
Thread.sleep(10000);
System.out.println("sleeped enough >"+response.getStatus());
c++;
if(c==50){
System.out.println("timeout?");
break;
}
}
public static SQLAdmin createSqlAdminService() throws IOException, GeneralSecurityException {
HttpTransport httpTransport = GoogleNetHttpTransport.newTrustedTransport();
JsonFactory jsonFactory = JacksonFactory.getDefaultInstance();
GoogleCredential credential = GoogleCredential.getApplicationDefault();
if (credential.createScopedRequired()) {
credential =
credential.createScoped(Arrays.asList("https://www.googleapis.com/auth/cloud-platform"));
}
return new SQLAdmin.Builder(httpTransport, jsonFactory, credential)
.setApplicationName("Google-SQLAdminSample/0.1")
.build();
}
I am not quite sure how the response should be treated; it seems to be an async request. Either way, I always get status Pending; it seems the import never even starts executing.
Of course it ends up timing out. What is wrong here? Why does the request never start? I couldn't find any actual example on the internet of using this Java SDK to import files, except the link I gave above.

Well, the thing is that the response object is just a snapshot: its status is a plain string set when the request was executed, so it will always read "Pending" - it is not actually being updated.
To get the actual status, you have to request it from Google using the SDK. I did something like this (it would be better to start with a smaller sleep time and grow it on each retry; a sketch of that follows the code):
SQLAdmin.Instances.SQLAdminImport request = sqlAdminService.instances().sqladminImport(CLOUD_PROJECT, CLOUD_SQL_INSTANCE, requestBody);
// execution of our import request
Operation response = request.execute();
int tried = 0;
Operation statusOperation;
do {
// sleep one minute
Thread.sleep(60000);
// here we are requesting the status of our operation. Name is actually the unique identifier
Get requestStatus = sqlAdminService.operations().get(CLOUD_PROJECT, response.getName());
statusOperation = requestStatus.execute();
tried++;
System.out.println("status is: " + statusOperation.getStatus());
} while(!statusOperation.getStatus().equalsIgnoreCase("DONE") && tried < 10);
if (!statusOperation.getStatus().equalsIgnoreCase("DONE")) {
throw new Exception("import failed: Timeout");
}
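The growing sleep mentioned above can be as simple as doubling the interval on each poll. A sketch reusing the same sqlAdminService and response objects (assumes the enclosing method declares InterruptedException):
long delayMs = 5000; // start with a short poll interval
Operation statusOperation;
do {
    Thread.sleep(delayMs);
    delayMs = Math.min(delayMs * 2, 60000); // back off, capped at one minute
    statusOperation = sqlAdminService.operations()
            .get(CLOUD_PROJECT, response.getName())
            .execute();
} while (!statusOperation.getStatus().equalsIgnoreCase("DONE"));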

Get ClusterName of MQ Queue using Java

I'm building a Java application that connects to an MQQueueManager and extracts information about queues. I'm able to get data like QueueType, MaximumMessageLength and more. However, I also want the name of the cluster the queue might be in. There is no function that comes with MQQueue that gives me this information. After searching the internet I found several things pointing in this direction, but no examples.
A part of my function that gives me the MaximumDepth is:
queueManager = makeConnection(host, portNo, qMgr, channelName);
queue = queueManager.accessQueue(queueName, CMQC.MQOO_INQUIRE);
maxQueueDepth = queue.getMaximumDepth();
(makeConnection is not shown here; it is the function that makes the actual connection to the QueueManager. I also left out the try/catch/finally for less clutter.)
How do I get ClusterName and perhaps other data, that doesn't have a function like queue.getMaximumDepth()?
There are two ways to get information about a queue.
The API Inquire call gets the operational status of a queue. This includes things like the name the MQOpen call resolved to, or the depth if the queue is local. Much of the q.inquire functionality has been superseded by getter and setter functions on the queue. If you are not using the v8.0 client with the latest functionality, you are highly advised to upgrade; it can access all versions of queue manager.
The following code is from Getting and setting attribute values in WebSphere MQ classes for Java
// inquire on a queue
final static int MQIA_DEF_PRIORITY = 6;
final static int MQCA_Q_DESC = 2013;
final static int MQ_Q_DESC_LENGTH = 64;
int[] selectors = new int[2];
int[] intAttrs = new int[1];
byte[] charAttrs = new byte[MQ_Q_DESC_LENGTH];
selectors[0] = MQIA_DEF_PRIORITY;
selectors[1] = MQCA_Q_DESC;
queue.inquire(selectors,intAttrs,charAttrs);
System.out.println("Default Priority = " + intAttrs[0]);
System.out.println("Description : " + new String(charAttrs,0));
For things that are not part of the API Inquire call, a PCF command is needed. Programmable Command Format, commonly abbreviated as PCF, is a message format used to pass commands to the command queue and to read messages from the command queue, event queues and others.
To use a PCF command the calling application must be authorized with +put on SYSTEM.ADMIN.COMMAND.QUEUE and for +dsp on the object being inquired upon.
IBM provides sample code.
On Windows, please see: %MQ_FILE_PATH%\Tools\pcf\samples
In UNIX flavors, please see: /opt/mqm/samp/pcf/samples
The locations may vary depending on where MQ was installed.
Please see: Handling PCF messages with IBM MQ classes for Java. The following snippet is from the PCF_DisplayActiveLocalQueues.java sample program.
public static void DisplayActiveLocalQueues(PCF_CommonMethods pcfCM) throws PCFException,
MQDataException, IOException {
// Create the PCF message type for the inquire.
PCFMessage pcfCmd = new PCFMessage(MQConstants.MQCMD_INQUIRE_Q);
// Add the inquire rules.
// Queue name = wildcard.
pcfCmd.addParameter(MQConstants.MQCA_Q_NAME, "*");
// Queue type = LOCAL.
pcfCmd.addParameter(MQConstants.MQIA_Q_TYPE, MQConstants.MQQT_LOCAL);
// Queue depth filter = "WHERE depth > 0".
pcfCmd.addFilterParameter(MQConstants.MQIA_CURRENT_Q_DEPTH, MQConstants.MQCFOP_GREATER, 0);
// Execute the command. The returned object is an array of PCF messages.
PCFMessage[] pcfResponse = pcfCM.agent.send(pcfCmd);
// For each returned message, extract the message from the array and display the
// required information.
System.out.println("+-----+------------------------------------------------+-----+");
System.out.println("|Index| Queue Name |Depth|");
System.out.println("+-----+------------------------------------------------+-----+");
for (int index = 0; index < pcfResponse.length; index++) {
PCFMessage response = pcfResponse[index];
System.out.println("|"
+ (index + pcfCM.padding).substring(0, 5)
+ "|"
+ (response.getParameterValue(MQConstants.MQCA_Q_NAME) + pcfCM.padding).substring(0, 48)
+ "|"
+ (response.getParameterValue(MQConstants.MQIA_CURRENT_Q_DEPTH) + pcfCM.padding)
.substring(0, 5) + "|");
}
System.out.println("+-----+------------------------------------------------+-----+");
return;
}
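Since the question was specifically about the cluster name, a PCF inquiry for just that attribute might look like the following sketch (it reuses the sample's pcfCM agent; the queue name is a hypothetical placeholder, and MQCA_CLUSTER_NAME is taken from MQConstants):
// Sketch: inquire only the cluster name of a single queue via PCF.
PCFMessage inquireQ = new PCFMessage(MQConstants.MQCMD_INQUIRE_Q);
inquireQ.addParameter(MQConstants.MQCA_Q_NAME, "MY.QUEUE.NAME"); // hypothetical queue name
inquireQ.addParameter(MQConstants.MQIACF_Q_ATTRS, new int[] { MQConstants.MQCA_CLUSTER_NAME });
PCFMessage[] answer = pcfCM.agent.send(inquireQ);
String clusterName = answer[0].getStringParameterValue(MQConstants.MQCA_CLUSTER_NAME);
System.out.println("Cluster: " + clusterName.trim()); // MQ blank-pads char attributes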
After more research I finally found what I was looking for.
This IBM example, Getting and setting attribute values in WebSphere MQ classes, helped me set up the inquiry.
I found the necessary values in this list: Constant Field Values.
I also needed to expand the openOptions argument of accessQueue(); otherwise cluster queues cannot be inquired.
Final result:
(without makeConnection())
public class QueueManagerServices {
    final static int MQOO_INQUIRE_TOTAL = CMQC.MQOO_FAIL_IF_QUIESCING | CMQC.MQOO_INPUT_SHARED | CMQC.MQOO_INQUIRE;
    MQQueueManager queueManager = null;
    String cluster = null;
    MQQueue queue = null;

    public String getcluster(String host, int portNo, String qMgr, String channelName, String queueName) {
        try {
            queueManager = makeConnection(host, portNo, qMgr, channelName);
            queue = queueManager.accessQueue(queueName, MQOO_INQUIRE_TOTAL);
            int MQCA_CLUSTER_NAME = 2029;
            int MQ_CLUSTER_NAME_LENGTH = 48;
            int[] selectors = new int[1];
            int[] intAttrs = new int[1];
            byte[] charAttrs = new byte[MQ_CLUSTER_NAME_LENGTH];
            selectors[0] = MQCA_CLUSTER_NAME;
            queue.inquire(selectors, intAttrs, charAttrs);
            cluster = new String(charAttrs).trim(); // MQ blank-pads char attributes
        } catch (MQException e) {
            System.out.println(e);
        } finally {
            // close/disconnect also throw MQException, so guard them too
            try {
                if (queue != null) {
                    queue.close();
                }
                if (queueManager != null) {
                    queueManager.disconnect();
                }
            } catch (MQException e) {
                System.out.println(e);
            }
        }
        return cluster;
    }
}

How to follow Single Responsibility principle in my HttpClient executor?

I am using RestTemplate as my HTTP client to execute URLs, and the server returns a JSON string as the response. Customers call this library by passing a DataKey object which has a userId in it.
Using the given userId, I find out which machines I can hit to get the data and store those machines in a LinkedList, so that I can execute them sequentially.
After that I check whether the first hostname is in the block list or not. If it is not in the block list, I make a URL with the first hostname in the list and execute it; if the response is successful, I return it. But if that first hostname is in the block list, I try the second hostname in the list, make the URL and execute it - so basically, I first find a hostname which is not in the block list before making the URL.
Now, suppose we selected the first hostname which was not in the block list, executed the URL, and the server was down or not responding; then I execute the next hostname in the list and keep doing this until I get a successful response, again skipping hosts that are in the block list as per the point above.
If all servers are down or in the block list, I can simply log and return the error that the service is unavailable.
Below is my DataClient class which will be called by customer and they will pass DataKey object to getData method.
public class DataClient implements Client {
private RestTemplate restTemplate = new RestTemplate(new HttpComponentsClientHttpRequestFactory());
private ExecutorService service = Executors.newFixedThreadPool(15);
public Future<DataResponse> getData(DataKey key) {
DataExecutorTask task = new DataExecutorTask(key, restTemplate);
Future<DataResponse> future = service.submit(task);
return future;
}
}
Below is my DataExecutorTask class:
public class DataExecutorTask implements Callable<DataResponse> {
private DataKey key;
private RestTemplate restTemplate;
public DataExecutorTask(DataKey key, RestTemplate restTemplate) {
this.restTemplate = restTemplate;
this.key = key;
}
@Override
public DataResponse call() {
DataResponse dataResponse = null;
ResponseEntity<String> response = null;
MappingsHolder mappings = ShardMappings.getMappings(key.getTypeOfFlow());
// given a userId, find all the hostnames
// the list can contain one, four, six or any number of hostnames
List<String> hostnames = mappings.getListOfHostnames(key.getUserId());
for (String hostname : hostnames) {
// If host name is null or host name is in local block list, skip sending request to this host
if (ClientUtils.isEmpty(hostname) || ShardMappings.isBlockHost(hostname)) {
continue;
}
try {
String url = generateURL(hostname);
response = restTemplate.exchange(url, HttpMethod.GET, key.getEntity(), String.class);
if (response.getStatusCode() == HttpStatus.NO_CONTENT) {
dataResponse = new DataResponse(response.getBody(), DataErrorEnum.NO_CONTENT,
DataStatusEnum.SUCCESS);
} else {
dataResponse = new DataResponse(response.getBody(), DataErrorEnum.OK,
DataStatusEnum.SUCCESS);
}
break;
// the catch blocks below look duplicated
} catch (HttpClientErrorException ex) {
HttpStatusCodeException httpException = (HttpStatusCodeException) ex;
DataErrorEnum error = DataErrorEnum.getErrorEnumByException(httpException);
String errorMessage = httpException.getResponseBodyAsString();
dataResponse = new DataResponse(errorMessage, error, DataStatusEnum.ERROR);
return dataResponse;
} catch (HttpServerErrorException ex) {
HttpStatusCodeException httpException = (HttpStatusCodeException) ex;
DataErrorEnum error = DataErrorEnum.getErrorEnumByException(httpException);
String errorMessage = httpException.getResponseBodyAsString();
dataResponse = new DataResponse(errorMessage, error, DataStatusEnum.ERROR);
return dataResponse;
} catch (RestClientException ex) {
// if it comes here, then it means some of the servers are down so adding it into block list
ShardMappings.blockHost(hostname);
}
}
if (ClientUtils.isEmpty(hostnames)) {
dataResponse = new DataResponse(null, DataErrorEnum.PERT_ERROR, DataStatusEnum.ERROR);
} else if (response == null) { // either all the servers are down or all the servers were in block list
dataResponse = new DataResponse(null, DataErrorEnum.SERVICE_UNAVAILABLE, DataStatusEnum.ERROR);
}
return dataResponse;
}
}
My block list keeps getting updated by another background thread every minute. If any server is down and not responding, then I need to block that server by using this:
ShardMappings.blockHost(hostname);
And to check whether a server is in the block list, I use this:
ShardMappings.isBlockHost(hostname);
I am returning SERVICE_UNAVAILABLE if the servers are down or in the block list, on the basis of the response == null check; I am not sure whether that is the right approach.
I guess I am not following the Single Responsibility Principle here at all.
Can anyone provide an example of the best way to apply the SRP here?
After thinking a lot, I was able to extract a Hosts class, given below, but I am not sure what the best way is to use it in my DataExecutorTask class above.
public class Hosts {
    private final LinkedList<String> hostnames = new LinkedList<String>();

    public Hosts(final List<String> hosts) {
        checkNotNull(hosts, "hosts cannot be null");
        this.hostnames.addAll(hosts);
    }

    public Optional<String> getNextAvailableHostname() {
        while (!hostnames.isEmpty()) {
            String firstHostname = hostnames.removeFirst();
            if (!ClientUtils.isEmpty(firstHostname) && !ShardMappings.isBlockHost(firstHostname)) {
                return Optional.of(firstHostname);
            }
        }
        return Optional.absent();
    }

    public boolean isEmpty() {
        return hostnames.isEmpty();
    }
}
Your concern is valid. First, let's see what the original data executor does:
- First, it gets the list of hostnames.
- Next, it loops through every hostname and does the following:
  - Checks whether the hostname is valid for sending a request (if not valid: skip; else: continue).
  - Generates the URL based on the hostname.
  - Sends the request.
  - Translates the HTTP response into a domain response.
  - Handles exceptions.
- If the hostname list is empty, it generates an empty response.
- Returns the response.
Now, what can we do to follow SRP? As I see it, these operations can be grouped and split into:
HostnameValidator: checks whether the hostname is valid for sending a request
--------------
HostnameRequestSender: generates the URL and sends the request
--------------
HttpToDataResponse: translates the HTTP response into a domain response
--------------
HostnameExceptionHandler: handles exceptions
That is one approach to decouple your operations and follow SRP. There is also another approach, which first simplifies the operations:
- First, it gets the list of hostnames.
- If the hostname list is empty, it generates an empty response.
- Next, it loops through every hostname and does the following:
  - Checks whether the hostname is valid for sending a request.
  - If not valid: removes the hostname.
  - Else: generates the URL based on the hostname.
- Next, it loops through every valid hostname and does the following:
  - Sends the request.
  - Translates the HTTP response into a domain response.
  - Handles exceptions.
- Returns the response.
Then it can be split into:
HostnameValidator: checks whether the hostname is valid for sending a request
--------------
ValidHostnameData: gets the list of hostnames, then loops through every hostname, checking whether it is valid for sending a request; if not valid it removes the hostname, else it generates the URL based on the hostname
--------------
HostnameRequestSender: sends the request
--------------
HttpToDataResponse: translates the HTTP response into a domain response
--------------
HostnameExceptionHandler: handles exceptions
Of course there are other ways to do it, and I leave the implementation details open because there are many ways to implement this. A rough sketch of the first split follows.
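The names below are hypothetical, taken straight from the groups above, and the parameter and return types are borrowed from the question's code, so treat this as a sketch to adapt rather than a finished design:
interface HostnameValidator {
    // The empty check and the block-list check live in one place.
    boolean isUsable(String hostname);
}

interface HostnameRequestSender {
    // URL generation plus request execution.
    ResponseEntity<String> send(String hostname, DataKey key);
}

interface HttpToDataResponse {
    // HTTP response to domain response translation.
    DataResponse translate(ResponseEntity<String> response);
}

interface HostnameExceptionHandler {
    // Exception to domain error translation.
    DataResponse handle(HttpStatusCodeException exception);
}
With these in place, DataExecutorTask shrinks to a loop that asks the validator about each hostname, delegates the call to the sender, and hands the outcome to one of the two translators.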

JavaMail IMAP over SSL quite slow - Bulk fetching multiple messages

I am currently trying to use JavaMail to get emails from IMAP servers (Gmail and others). Basically, my code works: I can indeed get the headers, body contents and so on. My problem is the following: when working against an IMAP server (no SSL), it takes about 1-2 ms to process a message. When I go against an IMAPS server (hence with SSL, such as Gmail) I reach around 250 ms per message. I ONLY measure the time spent processing the messages (the connection, handshake and such are NOT taken into account).
I know that since this is SSL, the data is encrypted. However, the time for decryption should not be that significant, should it?
I have tried setting a higher ServerCacheSize value and a higher connectionpoolsize, but I am seriously running out of ideas. Has anyone run into this problem, and, one might hope, solved it?
My fear is that the JavaMail API uses a different connection each time it fetches a mail from the IMAPS server (incurring the handshake overhead every time). If so, is there a way to override this behavior?
Here is my code (although quite standard) called from the Main() class:
public static int connectTest(String SSL, String user, String pwd, String host) throws IOException,
ProtocolException,
GeneralSecurityException {
Properties props = System.getProperties();
props.setProperty("mail.store.protocol", SSL);
props.setProperty("mail.imaps.ssl.trust", host);
props.setProperty("mail.imaps.connectionpoolsize", "10");
try {
Session session = Session.getDefaultInstance(props, null);
// session.setDebug(true);
Store store = session.getStore(SSL);
store.connect(host, user, pwd);
Folder inbox = store.getFolder("INBOX");
inbox.open(Folder.READ_ONLY);
int numMess = inbox.getMessageCount();
Message[] messages = inbox.getMessages();
for (Message m : messages) {
m.getAllHeaders();
m.getContent();
}
inbox.close(false);
store.close();
return numMess;
} catch (MessagingException e) {
e.printStackTrace();
System.exit(2);
}
return 0;
}
Thanks in advance.
After a lot of work, and assistance from the people at JavaMail, the source of this "slowness" turned out to be the FETCH behavior of the API. Indeed, as pjaol said, we go back to the server each time we need info (a header, or message content) for a message.
While FetchProfile allows us to bulk-fetch header information or flags for many messages, getting the contents of multiple messages is NOT directly possible.
Luckily, we can write our own IMAP command to avoid this "limitation" (it was done this way to avoid out-of-memory errors: fetching every mail into memory in one command can be quite heavy).
Here is my code:
import java.io.ByteArrayInputStream;
import java.util.Properties;

import javax.mail.MessagingException;
import javax.mail.Session;
import javax.mail.internet.MimeMessage;

import com.sun.mail.iap.Argument;
import com.sun.mail.iap.ProtocolException;
import com.sun.mail.iap.Response;
import com.sun.mail.imap.IMAPFolder;
import com.sun.mail.imap.protocol.BODY;
import com.sun.mail.imap.protocol.FetchResponse;
import com.sun.mail.imap.protocol.IMAPProtocol;
import com.sun.mail.imap.protocol.IMAPResponse;
import com.sun.mail.imap.protocol.UID;
public class CustomProtocolCommand implements IMAPFolder.ProtocolCommand {
/** Index on server of first mail to fetch **/
int start;
/** Index on server of last mail to fetch **/
int end;
public CustomProtocolCommand(int start, int end) {
this.start = start;
this.end = end;
}
@Override
public Object doCommand(IMAPProtocol protocol) throws ProtocolException {
Argument args = new Argument();
args.writeString(Integer.toString(start) + ":" + Integer.toString(end));
args.writeString("BODY[]");
Response[] r = protocol.command("FETCH", args);
Response response = r[r.length - 1];
if (response.isOK()) {
Properties props = new Properties();
props.setProperty("mail.store.protocol", "imap");
props.setProperty("mail.mime.base64.ignoreerrors", "true");
props.setProperty("mail.imap.partialfetch", "false");
props.setProperty("mail.imaps.partialfetch", "false");
Session session = Session.getInstance(props, null);
FetchResponse fetch;
BODY body;
MimeMessage mm;
ByteArrayInputStream is = null;
// last response is only result summary: not contents
for (int i = 0; i < r.length - 1; i++) {
if (r[i] instanceof IMAPResponse) {
fetch = (FetchResponse) r[i];
body = (BODY) fetch.getItem(0);
is = body.getByteArrayInputStream();
try {
mm = new MimeMessage(session, is);
Contents.getContents(mm, i);
} catch (MessagingException e) {
e.printStackTrace();
}
}
}
}
// dispatch remaining untagged responses
protocol.notifyResponseHandlers(r);
protocol.handleResult(response);
return "" + (r.length - 1);
}
}
The getContents(MimeMessage mm, int i) function is a classic function that recursively writes the contents of the message to a file (many examples are available on the net).
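For completeness, a hypothetical minimal version of that walk could look like this (the real one writes to a file named after the index instead of printing):
import java.io.IOException;

import javax.mail.MessagingException;
import javax.mail.Multipart;
import javax.mail.Part;

public class Contents {
    // Minimal stand-in for the Contents.getContents(mm, i) call used above.
    public static void getContents(Part part, int index) throws MessagingException, IOException {
        if (part.isMimeType("multipart/*")) {
            Multipart mp = (Multipart) part.getContent();
            for (int i = 0; i < mp.getCount(); i++) {
                getContents(mp.getBodyPart(i), index); // recurse into each body part
            }
        } else if (part.isMimeType("text/*")) {
            System.out.println(part.getContent()); // plain or HTML body text
        }
    }
}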
To avoid out-of-memory errors, I simply set maxDoc and maxSize limits (chosen arbitrarily; they can probably be improved!), used as follows:
public int efficientGetContents(IMAPFolder inbox, Message[] messages)
throws MessagingException {
FetchProfile fp = new FetchProfile();
fp.add(FetchProfile.Item.FLAGS);
fp.add(FetchProfile.Item.ENVELOPE);
inbox.fetch(messages, fp);
int index = 0;
int nbMessages = messages.length;
final int maxDoc = 5000;
final long maxSize = 100000000; // 100 MB
// Message numbers limit to fetch
int start;
int end;
while (index < nbMessages) {
start = messages[index].getMessageNumber();
int docs = 0;
int totalSize = 0;
boolean noskip = true; // There are no jumps in the message numbers
// list
boolean notend = true;
// Until we reach one of the limits
while (docs < maxDoc && totalSize < maxSize && noskip && notend) {
docs++;
totalSize += messages[index].getSize();
index++;
if (notend = (index < nbMessages)) {
noskip = (messages[index - 1].getMessageNumber() + 1 == messages[index]
.getMessageNumber());
}
}
end = messages[index - 1].getMessageNumber();
inbox.doCommand(new CustomProtocolCommand(start, end));
System.out.println("Fetching contents for " + start + ":" + end);
System.out.println("Size fetched = " + (totalSize / 1000000)
+ " Mo");
}
return nbMessages;
}
Do note that here I am using message numbers, which are unstable (they change if messages are erased from the server). A better method would be to use UIDs; then you would change the command from FETCH to UID FETCH, as sketched below.
Hope this helps out!
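For reference, the UID variant mentioned above only changes the command construction inside doCommand, along these lines (startUid and endUid are hypothetical values obtained beforehand via IMAPFolder.getUID(Message)):
// Same bulk fetch, addressed by stable UIDs instead of message numbers.
Argument args = new Argument();
args.writeString(startUid + ":" + endUid);
args.writeString("BODY[]");
Response[] r = protocol.command("UID FETCH", args);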
You need to add a FetchProfile to the inbox before you iterate through the messages.
Message is a lazy-loading object: it will go back to the server for each message and for each field that isn't provided by the default profile.
e.g.
for (Message message: messages) {
message.getSubject(); //-> goes to the imap server to fetch the subject line
}
If you want to display an inbox listing of, say, just From, Subject, Sent, Attachment, etc., you would use something like the following:
inbox.open(Folder.READ_ONLY);
Message[] messages = inbox.getMessages(start + 1, total);
FetchProfile fp = new FetchProfile();
fp.add(FetchProfile.Item.ENVELOPE);
fp.add(FetchProfileItem.FLAGS);
fp.add(FetchProfileItem.CONTENT_INFO);
fp.add("X-mailer");
inbox.fetch(messages, fp); // Load the profile of the messages in 1 fetch.
for (Message message: messages) {
message.getSubject(); //Subject is already local, no additional fetch required
}
Hope that helps.
The total time includes the time required in cryptographic operations. The cryptographic operations need a random seeder. There are different random seeding implementations which provide random bits for use in the cryptography. By default, Java uses /dev/urandom and this is specified in your java.security as below:
securerandom.source=file:/dev/urandom
On Windows, Java uses the Microsoft CryptoAPI seeding functionality, which usually has no problems. However, on Unix and Linux, Java by default uses /dev/random for random seeding, and read operations on /dev/random sometimes block and take a long time to complete. If you are using a *nix platform then the time spent in this would get counted in the overall time.
Since I don't know what platform you are using, I can't say for sure that this is your problem. But if you are on *nix, this could be one of the reasons why your operations are taking a long time. One solution could be to use /dev/urandom instead of /dev/random as your random seeder, which does not block. This can be specified with the system property "java.security.egd". For example,
-Djava.security.egd=file:/dev/urandom
Specifying this system property will override the securerandom.source setting in your java.security file. You can give it a try. Hope it helps.
