Kubernetes Java Client API Pod readiness - java

I am trying to create a pod from a Helm file on my Kubernetes cluster and wait until the pod is ready.
For this I use the Kubernetes Java Client API. My code is:
public static String startWdPod(File specFile, String namespace, String imageName, String controllerAddress)
        throws IOException, ApiException, InterruptedException {
    LOGGER.info("Loading pod spec from {}", specFile);
    V1Pod preparedPod = (V1Pod) Yaml.load(specFile);
    // Dynamic settings
    String name = WD_NAME_BASE + UUID.randomUUID();
    Objects.requireNonNull(preparedPod.getMetadata()).setName(name);
    V1Container container = Objects.requireNonNull(preparedPod.getSpec()).getContainers().get(0);
    container.setImage(imageName);
    Objects.requireNonNull(container.getEnv()).add(new V1EnvVar().name("WD_CONTROLLER_ADDRESS").value(controllerAddress));
    LOGGER.info("Creating pod {} with image {}", name, imageName);
    V1Pod pod = getApi().createNamespacedPod(namespace, preparedPod, null, null, null);
    podReference = pod;
    int count = 0;
    LOGGER.info("POD Phase1 {}", pod.getStatus().getPhase());
    while (count < 50) {
        if (pod.getStatus().getPhase().equals("Ready")) {
            LOGGER.info("Found Ready");
            count = 50;
        }
        LOGGER.info("POD not ready, wait 1 second");
        LOGGER.info("POD Phase loop {}", pod.getStatus().getPhase());
        TimeUnit.SECONDS.sleep(1);
        count++;
    }
    return Objects.requireNonNull(pod.getMetadata()).getName();
}
The pod is created fine and becomes ready after some time. But in my loop the status never changes from "Pending"; I expect it to change to "Ready" at some point. Is there something wrong in my thinking?
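Worth noting when reading the code above: the V1Pod returned by createNamespacedPod is a one-time snapshot that the loop never refreshes, and "Ready" is not a pod phase (the phases are Pending, Running, Succeeded, Failed and Unknown; readiness is reported as a status condition). A minimal sketch of a loop that re-reads the pod from the server each iteration, assuming the official Kubernetes Java client (the exact readNamespacedPod signature differs between client versions):
// Sketch only: ask the API server for a fresh pod object each iteration.
// Older client versions take extra trailing arguments on readNamespacedPod.
int count = 0;
while (count < 50) {
    V1Pod current = getApi().readNamespacedPod(name, namespace, null);
    String phase = current.getStatus() == null ? null : current.getStatus().getPhase();
    LOGGER.info("POD phase {}", phase);
    if ("Running".equals(phase)) { // readiness itself lives in status.conditions
        break;
    }
    TimeUnit.SECONDS.sleep(1);
    count++;
}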

Related

How to check gcp pubsub empty/inactive subscription

I have an application that subscribes to a GCP topic; when there are messages there, it downloads them and sends them to a queue on ActiveMQ.
In order to make this process fast, I am using an executorService and launching multiple threads for sending messages to ActiveMQ. Since the subscription is supposed to be an ongoing task, I am putting the code in a while(true) loop, and hence I can't shut down the executorService in a normal fashion, as I would be creating and shutting down the executor service on every loop.
I am searching for an elegant way to shut down the executorService when the subscription has been empty (no data in the topic) for some inactivity window, say 2 or 3 minutes, and then of course have it start again when there is new data.
The following is my current idea, which I don't like: just a counter that I increment when the subscription retrieves no data.
I am looking for a more elegant way of doing that.
@Service
@Slf4j
public class PubSubSubscriberService {

    private static final int EMPTY_SUBSCRIPTION_COUNTER = 4;
    private static final Logger businessLogger = LoggerFactory.getLogger("BusinessLogger");

    private Queue<PubsubMessage> messages = new ConcurrentLinkedQueue<>();

    public void pullMessagesAndSendToBroker(CompositeConfigurationElement cce) {
        var patchSize = cce.getSubscriber().getPatchSize();
        var nThreads = cce.getSubscriber().getSendingParallelThreads();
        var scheduledTasks = 0;
        var subscribeCounter = 0;
        ThreadPoolExecutor threadPoolExecutor = null;
        while (true) {
            try {
                if (subscribeCounter < EMPTY_SUBSCRIPTION_COUNTER) {
                    log.info("Creating Executor Service for uploading to broker with a thread pool of Size: " + nThreads);
                    threadPoolExecutor = getThreadPoolExecutor(nThreads);
                }
                var subscriber = this.getSubscriber(cce);
                this.startSubscriber(subscriber, cce);
                this.checkActivity(threadPoolExecutor, subscribeCounter++);
                // send patches of {{ messagesPerIteration }}
                while (this.messages.size() > patchSize) {
                    if (poolIsReady(threadPoolExecutor, nThreads)) {
                        UploadTask task = new UploadTask(this.messages, cce, cf, patchSize);
                        threadPoolExecutor.submit(task);
                        scheduledTasks++;
                    }
                    subscribeCounter = 0;
                }
                // send the rest
                if (this.messages.size() > 0) {
                    UploadTask task = new UploadTask(this.messages, cce, cf, patchSize);
                    threadPoolExecutor.submit(task);
                    scheduledTasks++;
                    subscribeCounter = 0;
                }
                if (scheduledTasks > 0) {
                    businessLogger.info("Scheduled " + scheduledTasks + " upload tasks of size up to: " + patchSize
                            + ", preparing to start subscribing for 30 more sec");
                    scheduledTasks = 0;
                }
            } catch (Exception e) {
                e.printStackTrace();
                businessLogger.error(e.getMessage());
            }
        }
    }
}
An idle pool takes little space and memory and consumes almost no CPU when it is not used. Set a maximum limit on your pool's capacity and keep it around instead of trying to downscale it. If you have too many messages to process, the tasks are queued until a free executor thread can complete them.
If scaling up and down is a concern, your design could be reviewed. Instead of an executor pool internal to the pod, you could trigger events in your cluster and process them in parallel on other pods. Those pods can then scale up and down according to the traffic (have a look at Knative).
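If you do still want the pool to shrink on its own during quiet periods rather than being shut down, here is a sketch using only standard java.util.concurrent; the pool size comes from the question's nThreads and the 2-minute idle window is illustrative:
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Build a pool whose threads all expire after 2 minutes of inactivity, so
// there is nothing to shut down between subscription bursts; threads are
// re-created on demand as soon as new tasks arrive.
static ThreadPoolExecutor buildSelfShrinkingPool(int nThreads) {
    ThreadPoolExecutor pool = new ThreadPoolExecutor(
            nThreads, nThreads,        // core == max, like a fixed pool
            2, TimeUnit.MINUTES,       // idle window before a thread is reclaimed
            new LinkedBlockingQueue<>());
    pool.allowCoreThreadTimeOut(true); // let even core threads time out when idle
    return pool;
}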

Using AmazonSQS sendMessage over multiple threads causes it to run slower

I've got an app that sends simple SQS messages to multiple queues. Previously, this sending happened serially, but now that we've got more queues we need to send to, I decided to parallelize it by doing all the sending in a thread pool (up to 10 threads).
However, I've noticed that sqs.sendMessage latency seems to increase when I throw more threads at the job!
I've created a sample program below to reproduce the problem (Note that numIterations is just to get more data, and this is just a simplified version of the code for demo purposes).
Running on an EC2 instance in the same region and using 7 queues, I'm typically getting average results around 12-15ms with 1 thread, and 21-25ms with 7 threads - nearly double the latency!
Even running from my laptop remotely (when creating this demo), I'm getting average latency of ~90ms with 1 thread and ~120ms with 7 threads.
public static void main(String[] args) throws Exception {
    AWSCredentialsProvider creds = new AWSStaticCredentialsProvider(new BasicAWSCredentials(A, B));
    final int numThreads = 7;
    final int numQueues = 7;
    final int numIterations = 100;
    final long sleepMs = 10000;

    AmazonSQSClient sqs = new AmazonSQSClient(creds);
    List<String> queueUrls = new ArrayList<>();
    for (int i = 0; i < numQueues; i++) {
        queueUrls.add(sqs.getQueueUrl("testThreading-" + i).getQueueUrl());
    }

    Queue<Long> resultQueue = new ConcurrentLinkedQueue<>();
    sqs.addRequestHandler(new MyRequestHandler(resultQueue));
    runIterations(sqs, queueUrls, numThreads, numIterations, sleepMs);
    System.out.println("Average: " + resultQueue.stream().mapToLong(Long::longValue).average().getAsDouble());
    System.exit(0);
}

private static void runIterations(AmazonSQS sqs, List<String> queueUrls, int threadPoolSize, int numIterations, long sleepMs) throws Exception {
    ExecutorService executor = Executors.newFixedThreadPool(threadPoolSize);
    List<Future<?>> futures = new ArrayList<>();
    for (int i = 0; i < numIterations; i++) {
        for (String queueUrl : queueUrls) {
            final String message = String.valueOf(i);
            futures.add(executor.submit(() -> sendMessage(sqs, queueUrl, message)));
        }
        Thread.sleep(sleepMs);
    }
    for (Future<?> f : futures) {
        f.get();
    }
}

private static void sendMessage(AmazonSQS sqs, String queueUrl, String messageBody) {
    final SendMessageRequest request = new SendMessageRequest()
            .withQueueUrl(queueUrl)
            .withMessageBody(messageBody);
    sqs.sendMessage(request);
}

// Use RequestHandler2 to get accurate timing metrics
private static class MyRequestHandler extends RequestHandler2 {
    private final Queue<Long> resultQueue;

    public MyRequestHandler(Queue<Long> resultQueue) {
        this.resultQueue = resultQueue;
    }

    @Override
    public void afterResponse(Request<?> request, Response<?> response) {
        TimingInfo timingInfo = request.getAWSRequestMetrics().getTimingInfo();
        Long start = timingInfo.getStartEpochTimeMilliIfKnown();
        Long end = timingInfo.getEndEpochTimeMilliIfKnown();
        if (start != null && end != null) {
            long elapsed = end - start;
            resultQueue.add(elapsed);
        }
    }
}
I'm sure this is some weird client configuration issue, but the default ClientConfiguration should be able to handle 50 concurrent connections.
Any suggestions?
Update: It's looking like the key to this problem is something I left out of the original simplified version - there is a delay between batches of messages being sent (related to processing). The latency issue isn't there if the delay is ~2s, but it is an issue when the delay between batches is ~10s. I've tried different values for ClientConfiguration.validateAfterInactivityMillis with no effect.
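For reference, a sketch of how those client-side knobs are set in the v1 SDK; the values are illustrative, and whether they help depends on whether the ~10s pauses are long enough for pooled HTTP connections to go idle and get re-established (which would show up as extra per-request latency under higher thread counts):
import com.amazonaws.ClientConfiguration;

// Illustrative values only.
ClientConfiguration config = new ClientConfiguration()
        .withMaxConnections(50)                    // 50 is also the default
        .withConnectionTTL(60_000)                 // recycle connections after 60s
        .withValidateAfterInactivityMillis(2_000)  // re-validate idle connections sooner
        .withTcpKeepAlive(true);                   // keep idle connections alive

AmazonSQSClient sqs = new AmazonSQSClient(creds, config);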

Get ClusterName of MQ Queue using Java

I'm building a Java application that connects to an MQQueueManager and extracts information about queues. I'm able to get data like QueueType, MaximumMessageLength and more. However, I also want the name of the cluster the queue might be in. There is no function that comes with MQQueue that gives me this information. After searching the internet I found several things pointing in this direction, but no examples.
A part of my function that gives me the MaximumDepth is:
queueManager = makeConnection(host, portNo, qMgr, channelName);
queue = queueManager.accessQueue(queueName, CMQC.MQOO_INQUIRE);
maxQueueDepth = queue.getMaximumDepth();
(makeConnection is not shown here, it is the function that makes the actual connection to the QueueManager; I also left out the try/catch/finally for less clutter)
How do I get ClusterName and perhaps other data, that doesn't have a function like queue.getMaximumDepth()?
There are two ways to get information about a queue.
The API Inquire call gets the operational status of a queue. This includes things like the name the MQOpen call resolved to, or the depth if the queue is local. Much of the q.inquire functionality has been superseded by getter and setter functions on the queue. If you are not using the v8.0 client with the latest functionality, you are highly advised to upgrade; it can access all versions of QMgr.
The following code is from Getting and setting attribute values in WebSphere MQ classes for Java
// inquire on a queue
final static int MQIA_DEF_PRIORITY = 6;
final static int MQCA_Q_DESC = 2013;
final static int MQ_Q_DESC_LENGTH = 64;

int[] selectors = new int[2];
int[] intAttrs = new int[1];
byte[] charAttrs = new byte[MQ_Q_DESC_LENGTH];

selectors[0] = MQIA_DEF_PRIORITY;
selectors[1] = MQCA_Q_DESC;

queue.inquire(selectors, intAttrs, charAttrs);

System.out.println("Default Priority = " + intAttrs[0]);
System.out.println("Description : " + new String(charAttrs, 0));
For things that are not part of the API Inquire call, a PCF command is needed. Programmable Command Format, commonly abbreviated as PCF, is a message format used to pass messages to the command queue and for reading messages from the command queue, event queues and others.
To use a PCF command, the calling application must be authorized with +put on SYSTEM.ADMIN.COMMAND.QUEUE and with +dsp on the object being inquired upon.
IBM provides sample code.
On Windows, please see: %MQ_FILE_PATH%\Tools\pcf\samples
In UNIX flavors, please see: /opt/mqm/samp/pcf/samples
The locations may vary depending on where MQ was installed.
Please see: Handling PCF messages with IBM MQ classes for Java. The following snippet is from the PCF_DisplayActiveLocalQueues.java sample program.
public static void DisplayActiveLocalQueues(PCF_CommonMethods pcfCM) throws PCFException,
        MQDataException, IOException {

    // Create the PCF message type for the inquire.
    PCFMessage pcfCmd = new PCFMessage(MQConstants.MQCMD_INQUIRE_Q);

    // Add the inquire rules.
    // Queue name = wildcard.
    pcfCmd.addParameter(MQConstants.MQCA_Q_NAME, "*");

    // Queue type = LOCAL.
    pcfCmd.addParameter(MQConstants.MQIA_Q_TYPE, MQConstants.MQQT_LOCAL);

    // Queue depth filter = "WHERE depth > 0".
    pcfCmd.addFilterParameter(MQConstants.MQIA_CURRENT_Q_DEPTH, MQConstants.MQCFOP_GREATER, 0);

    // Execute the command. The returned object is an array of PCF messages.
    PCFMessage[] pcfResponse = pcfCM.agent.send(pcfCmd);

    // For each returned message, extract the message from the array and display the
    // required information.
    System.out.println("+-----+------------------------------------------------+-----+");
    System.out.println("|Index|                   Queue Name                   |Depth|");
    System.out.println("+-----+------------------------------------------------+-----+");

    for (int index = 0; index < pcfResponse.length; index++) {
        PCFMessage response = pcfResponse[index];
        System.out.println("|"
                + (index + pcfCM.padding).substring(0, 5)
                + "|"
                + (response.getParameterValue(MQConstants.MQCA_Q_NAME) + pcfCM.padding).substring(0, 48)
                + "|"
                + (response.getParameterValue(MQConstants.MQIA_CURRENT_Q_DEPTH) + pcfCM.padding)
                        .substring(0, 5) + "|");
    }
    System.out.println("+-----+------------------------------------------------+-----+");
    return;
}
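Since the original question was specifically about the cluster name, here is a hedged sketch of the same PCF approach narrowed to that one attribute. The constants are standard MQ constants; "MY.QUEUE" is a placeholder and pcfCM.agent is the PCFMessageAgent from the sample above:
// Sketch: inquire a single queue and read back its cluster name via PCF.
PCFMessage request = new PCFMessage(MQConstants.MQCMD_INQUIRE_Q);
request.addParameter(MQConstants.MQCA_Q_NAME, "MY.QUEUE");
// Ask only for the attribute we need.
request.addParameter(MQConstants.MQIACF_Q_ATTRS,
        new int[] { MQConstants.MQCA_CLUSTER_NAME });

PCFMessage[] responses = pcfCM.agent.send(request);
String clusterName = responses[0]
        .getStringParameterValue(MQConstants.MQCA_CLUSTER_NAME)
        .trim(); // returned value is padded with trailing blanks
System.out.println("Cluster: " + clusterName);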
After more research I finally found what I was looking for.
This IBM example, Getting and setting attribute values in WebSphere MQ classes, helped me set up the inquiry.
I found the necessary values in this list: Constant Field Values.
I also needed to expand the openOptionsArg of accessQueue(), otherwise cluster queues cannot be inquired.
Final result:
(without makeConnection())
public class QueueManagerServices {
    final static int MQOO_INQUIRE_TOTAL = CMQC.MQOO_FAIL_IF_QUIESCING | CMQC.MQOO_INPUT_SHARED | CMQC.MQOO_INQUIRE;

    MQQueueManager queueManager = null;
    String cluster = null;
    MQQueue queue = null;

    // queueName added as a parameter so the snippet compiles on its own
    public String getcluster(String host, int portNo, String qMgr, String channelName, String queueName) {
        try {
            queueManager = makeConnection(host, portNo, qMgr, channelName);
            queue = queueManager.accessQueue(queueName, MQOO_INQUIRE_TOTAL);

            int MQCA_CLUSTER_NAME = 2029;
            int MQ_CLUSTER_NAME_LENGTH = 48;
            int[] selectors = new int[1];
            int[] intAttrs = new int[1];
            byte[] charAttrs = new byte[MQ_CLUSTER_NAME_LENGTH];
            selectors[0] = MQCA_CLUSTER_NAME;

            queue.inquire(selectors, intAttrs, charAttrs);
            cluster = new String(charAttrs);
        } catch (MQException e) {
            System.out.println(e);
        } finally {
            try {
                if (queue != null) {
                    queue.close();
                }
                if (queueManager != null) {
                    queueManager.disconnect();
                }
            } catch (MQException e) {
                System.out.println(e);
            }
        }
        return cluster;
    }
}

How can I create a node recursively using zookeeper client library on Java?

I know this question has already been asked and answered for ZooKeeper using Python. The answer was good; however, I want something more related to the code. I've already implemented a method to create a node, but I want to do it recursively. The structure for my nodes will be like this:
ZOOKEEPER
    WEB SERVER
        SERVER1
        SERVER2
    MODULE CONNECTED
        DATABASE MODULE
            COMPUTER1
            COMPUTER2
        SERVICE MODULE
            COMPUTER3
        SEARCH MODULE
            COMPUTER4
I have something like:
ZooKeeper zk = new ZooKeeper(...);

public void createNodeRecursively(String type) throws KeeperException, InterruptedException {
    final String node = "/" + type + "/" + info.getIP() + ":" + info.getPort(); // Correct line
    if (zk.exists("/" + type, null) == null) {
        Object ctx = new Object();
        StringCallback cb = new StringCallback() {
            public void processResult(int rc, String path, Object ctx, String name) {
                if (name.equals("/" + type)) // just in case
                    try {
                        zk.create(node, info.getBytes(),
                                Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
            }
        };
        zk.create("/" + type, info.getBytes(), Ids.OPEN_ACL_UNSAFE,
                CreateMode.PERSISTENT, cb, ctx);
    } else {
        zk.create(node, info.getBytes(), Ids.OPEN_ACL_UNSAFE,
                CreateMode.EPHEMERAL);
    }
}
As you can see I am calling zk.create in several places, so I want to make the method recursive in order to gain performance and have cleaner code, but I don't know where to start. I'll be very grateful if somebody can help me with this. Thank you very much in advance.
ZooKeeper has two useful properties:
Total ordering of (write) requests
An asynchronous API
You can put those to use.
Simply issue the whole tree as a series of asynchronous requests in the correct order, then wait until all of them execute successfully. You can ignore 'NodeExists' errors along the way (though that is not ideal, since such errors will be written to the logs).
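A minimal sketch of that idea with the plain ZooKeeper client, assuming the question's imports plus java.util.concurrent.CountDownLatch and org.apache.zookeeper.AsyncCallback; the paths and data are placeholders:
// Sketch: fire a create for every prefix of the path asynchronously, in order,
// then wait for all acknowledgements. Total ordering guarantees that parents
// are created before children.
String[] paths = { "/app", "/app/db", "/app/db/computer1" }; // placeholder paths
CountDownLatch done = new CountDownLatch(paths.length);
AsyncCallback.StringCallback cb = (rc, path, ctx, name) ->
        done.countDown(); // treat OK and NODEEXISTS alike for this sketch
for (String path : paths) {
    zk.create(path, new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT, cb, null);
}
done.await(); // every create has been acknowledged by the server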
I managed to achieve better performance:
public void createNode(NodePath nodePath, NodeData nodeData, NodeRights nodeRights, NodeCreationHandler nodeCreationHandler)
        throws KeeperException, InterruptedException, ZookeeperCreationException {
    if (zk == null) {
        throw new ZookeeperCreationException("The zookeeper client has not been instanced.");
    }
    String targetPath = nodePath.getFullNodePath();
    targetPath = targetPath.substring(1, targetPath.length());
    byte[] serializedData = nodeData.serialize(new Object());

    String[] array = targetPath.split(ICoordinationConstants.BASE_ROOT_SPTR);
    String acum = "";
    for (int i = 0; i < array.length - 1; i++) {
        acum += (ICoordinationConstants.BASE_ROOT_SPTR + array[i]);
        if (zk.exists(acum, null) == null) {
            zk.create(acum, serializedData, Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        }
    }
    zk.create(acum + ICoordinationConstants.BASE_ROOT_SPTR + array[array.length - 1],
            serializedData, Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
}

JavaMail IMAP over SSL quite slow - Bulk fetching multiple messages

I am currently trying to use JavaMail to get emails from IMAP servers (Gmail and others). Basically, my code works: I can indeed get the headers, body contents and so on. My problem is the following: on an IMAP server (no SSL), it basically takes 1-2ms to process a message. When I go to an IMAPS server (hence with SSL, such as Gmail) I reach around 250ms/message. I ONLY measure the time spent processing the messages (the connection, handshake and such are NOT taken into account).
I know that since this is SSL, the data is encrypted. However, the time for decryption should not be that important, should it?
I have tried setting a higher ServerCacheSize value and a higher connectionpoolsize, but I am seriously running out of ideas. Has anyone been confronted with this problem? And, one might hope, solved it?
My fear is that the JavaMail API uses a different connection each time it fetches a mail from the IMAPS server (involving the overhead for handshake...). If so, is there a way to override this behavior?
Here is my code (although quite standard) called from the Main() class:
public static int connectTest(String SSL, String user, String pwd, String host)
        throws IOException, ProtocolException, GeneralSecurityException {
    Properties props = System.getProperties();
    props.setProperty("mail.store.protocol", SSL);
    props.setProperty("mail.imaps.ssl.trust", host);
    props.setProperty("mail.imaps.connectionpoolsize", "10");
    try {
        Session session = Session.getDefaultInstance(props, null);
        // session.setDebug(true);
        Store store = session.getStore(SSL);
        store.connect(host, user, pwd);
        Folder inbox = store.getFolder("INBOX");
        inbox.open(Folder.READ_ONLY);
        int numMess = inbox.getMessageCount();
        Message[] messages = inbox.getMessages();
        for (Message m : messages) {
            m.getAllHeaders();
            m.getContent();
        }
        inbox.close(false);
        store.close();
        return numMess;
    } catch (MessagingException e) {
        e.printStackTrace();
        System.exit(2);
    }
    return 0;
}
Thanks in advance.
After a lot of work, and assistance from the people at JavaMail, the source of this "slowness" turned out to be the FETCH behavior of the API. Indeed, as pjaol said, we return to the server each time we need info (a header, or message content) for a message.
While FetchProfile allows us to bulk-fetch header information or flags for many messages, getting the contents of multiple messages is NOT directly possible.
Luckily, we can write our own IMAP command to avoid this "limitation" (it was done this way to avoid out of memory errors: fetching every mail in memory in one command can be quite heavy).
Here is my code:
import java.io.ByteArrayInputStream;
import java.util.Properties;

import javax.mail.MessagingException;
import javax.mail.Session;
import javax.mail.internet.MimeMessage;

import com.sun.mail.iap.Argument;
import com.sun.mail.iap.ProtocolException;
import com.sun.mail.iap.Response;
import com.sun.mail.imap.IMAPFolder;
import com.sun.mail.imap.protocol.BODY;
import com.sun.mail.imap.protocol.FetchResponse;
import com.sun.mail.imap.protocol.IMAPProtocol;
import com.sun.mail.imap.protocol.IMAPResponse;

public class CustomProtocolCommand implements IMAPFolder.ProtocolCommand {

    /** Index on server of first mail to fetch **/
    int start;
    /** Index on server of last mail to fetch **/
    int end;

    public CustomProtocolCommand(int start, int end) {
        this.start = start;
        this.end = end;
    }

    @Override
    public Object doCommand(IMAPProtocol protocol) throws ProtocolException {
        Argument args = new Argument();
        args.writeString(Integer.toString(start) + ":" + Integer.toString(end));
        args.writeString("BODY[]");
        Response[] r = protocol.command("FETCH", args);
        Response response = r[r.length - 1];
        if (response.isOK()) {
            Properties props = new Properties();
            props.setProperty("mail.store.protocol", "imap");
            props.setProperty("mail.mime.base64.ignoreerrors", "true");
            props.setProperty("mail.imap.partialfetch", "false");
            props.setProperty("mail.imaps.partialfetch", "false");
            Session session = Session.getInstance(props, null);
            FetchResponse fetch;
            BODY body;
            MimeMessage mm;
            ByteArrayInputStream is = null;
            // last response is only result summary: not contents
            for (int i = 0; i < r.length - 1; i++) {
                if (r[i] instanceof IMAPResponse) {
                    fetch = (FetchResponse) r[i];
                    body = (BODY) fetch.getItem(0);
                    is = body.getByteArrayInputStream();
                    try {
                        mm = new MimeMessage(session, is);
                        Contents.getContents(mm, i);
                    } catch (MessagingException e) {
                        e.printStackTrace();
                    }
                }
            }
        }
        // dispatch remaining untagged responses
        protocol.notifyResponseHandlers(r);
        protocol.handleResult(response);
        return "" + (r.length - 1);
    }
}
The getContents(MimeMessage mm, int i) function is a classic function that recursively prints the contents of the message to a file (many examples are available on the net).
To avoid out of memory errors, I simply set a maxDocs and maxSize limit (this has been done arbitrarily and can probably be improved!) used as follows:
public int efficientGetContents(IMAPFolder inbox, Message[] messages)
        throws MessagingException {
    FetchProfile fp = new FetchProfile();
    fp.add(FetchProfile.Item.FLAGS);
    fp.add(FetchProfile.Item.ENVELOPE);
    inbox.fetch(messages, fp);
    int index = 0;
    int nbMessages = messages.length;
    final int maxDoc = 5000;
    final long maxSize = 100000000; // 100 MB
    // Message number limits to fetch
    int start;
    int end;
    while (index < nbMessages) {
        start = messages[index].getMessageNumber();
        int docs = 0;
        int totalSize = 0;
        boolean noskip = true; // There are no jumps in the message number list
        boolean notend = true;
        // Until we reach one of the limits
        while (docs < maxDoc && totalSize < maxSize && noskip && notend) {
            docs++;
            totalSize += messages[index].getSize();
            index++;
            if (notend = (index < nbMessages)) {
                noskip = (messages[index - 1].getMessageNumber() + 1
                        == messages[index].getMessageNumber());
            }
        }
        end = messages[index - 1].getMessageNumber();
        inbox.doCommand(new CustomProtocolCommand(start, end));
        System.out.println("Fetching contents for " + start + ":" + end);
        System.out.println("Size fetched = " + (totalSize / 1000000) + " MB");
    }
    return nbMessages;
}
Do note that here I am using message numbers, which are unstable (they change if messages are erased from the server). A better method would be to use UIDs; then you would change the command from FETCH to UID FETCH.
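For illustration, the switch inside doCommand would look roughly like this; startUid and endUid are hypothetical variables you would obtain beforehand via IMAPFolder.getUID(message):
// Sketch: the same command built over UIDs instead of message numbers.
// startUid/endUid are placeholders obtained from IMAPFolder.getUID(message).
Argument args = new Argument();
args.writeString(startUid + ":" + endUid);
args.writeString("BODY[]");
Response[] r = protocol.command("UID FETCH", args);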
Hope this helps out!
You need to add a FetchProfile to the inbox before you iterate through the messages.
Message is a lazy-loading object; it will return to the server for each message and for each field that isn't provided by the default profile.
e.g.
for (Message message : messages) {
    message.getSubject(); // -> goes to the IMAP server to fetch the subject line
}
If you want to display an inbox listing of, say, just From, Subject, Sent, Attachment etc., you would use something like the following:
inbox.open(Folder.READ_ONLY);
Message[] messages = inbox.getMessages(start + 1, total);
FetchProfile fp = new FetchProfile();
fp.add(FetchProfile.Item.ENVELOPE);
fp.add(FetchProfileItem.FLAGS);
fp.add(FetchProfileItem.CONTENT_INFO);
fp.add("X-mailer");
inbox.fetch(messages, fp); // Load the profile of the messages in 1 fetch.

for (Message message : messages) {
    message.getSubject(); // Subject is already local, no additional fetch required
}
Hope that helps.
The total time includes the time required for cryptographic operations. Cryptographic operations need a random seeder, and there are different random-seeding implementations that provide random bits for use in cryptography. By default, Java uses /dev/urandom, and this is specified in your java.security file as below:
securerandom.source=file:/dev/urandom
On Windows, Java uses the Microsoft CryptoAPI seed functionality, which usually has no problems. However, on Unix and Linux, Java by default uses /dev/random for random seeding, and read operations on /dev/random sometimes block and take a long time to complete. If you are on a *nix platform, the time spent there gets counted in the overall time.
Since I don't know what platform you are using, I can't say for sure that this is your problem. But if it is, it could be one of the reasons why your operations are taking a long time. One solution could be to use /dev/urandom, which does not block, instead of /dev/random as your random seeder. This can be specified with the system property "java.security.egd". For example,
-Djava.security.egd=file:/dev/urandom
Specifying this system property will override the securerandom.source setting in your java.security file. You can give it a try. Hope it helps.
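One caveat worth noting: on some older JDKs, setting java.security.egd to file:/dev/urandom was special-cased and still read from /dev/random; the commonly cited workaround was the odd-looking path below, which bypasses that special case.
-Djava.security.egd=file:/dev/./urandom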
