I need to send a customized email to 400 clients.
I am doing this:
for (Client c : clients) {
    setUpEmail(c);
    sendMail(c);
}
My problem is that my email provider only allows me to send 10 emails per minute. How can I enforce that in the loop?
Thanks.
Use Guava's RateLimiter.
If you already have Guava in your library path, or if you're interested in adding it, you can use this solution:
RateLimiter rateLimiter = RateLimiter.create(10 / 60d); // 10 permits per 60 seconds
for (Client c : clients) {
    setUpEmail(c);
    rateLimiter.acquire(1);
    sendMail(c);
}
Your kind of problem is exactly why RateLimiter was created.
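Note that acquire() blocks the calling thread until a permit becomes available (in recent Guava versions it also returns the time spent waiting), so the loop above throttles itself to ten mails per minute without any explicit sleeping.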
Use a counter and wait for a minute after every ten mails (the snippets below assume the enclosing method declares throws InterruptedException, since Thread.sleep can throw it):
int counter = 0;
for (Client c : clients) {
    counter++;
    setUpEmail(c);
    sendMail(c);
    if (counter % 10 == 0) {
        Thread.sleep(60 * 1000); // wait a minute
    }
}
This is not ideal since you may lose some time: e.g. when sending ten mails takes 20 seconds, you would only need to wait another 40 seconds before starting the next batch.
Another option would be to wait between each mail so that the time for 10 mails is at least 60 seconds:
for (Client c : clients) {
    setUpEmail(c);
    sendMail(c);
    Thread.sleep(6 * 1000); // wait 6 seconds
}
And a more sophisticated one:
int counter = 0;
long start = System.currentTimeMillis();
for (Client c : clients) {
    counter++;
    setUpEmail(c);
    sendMail(c);
    if (counter % 10 == 0) {
        long needed = System.currentTimeMillis() - start;  // ms needed for ten mails
        Thread.sleep(Math.max(0, 60 * 1000 - needed));     // wait for the rest of the minute, if any is left
        start = System.currentTimeMillis();                // start of the next batch
    }
}
Deque<Client> clientsDeque = new ArrayDeque<>(clients);
ScheduledExecutorService executor = Executors.newScheduledThreadPool(1);
Runnable task = () -> {
    for (int i = 0; i < 10; i++) {
        Client c = clientsDeque.poll();
        if (c == null) {        // all clients processed
            executor.shutdown();
            return;
        }
        setUpEmail(c);
        sendMail(c);
    }
};
executor.scheduleAtFixedRate(task, 0, 60, TimeUnit.SECONDS);
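Note that schedule(...) would run the task only once after the initial delay; scheduleAtFixedRate is what keeps a batch of ten mails going out every minute, and shutting the executor down once the deque is empty ends the schedule.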
I have a use case where I am writing to a Kafka topic in batches using a Spark job (no streaming). Initially I push, say, 10 records to the Kafka topic and run the Spark job, which does some processing and finally writes to another Kafka topic.
The next time, when I push another 5 records and run the Spark job, my requirement is to process only these 5 records, not to start from the beginning offset. I need to maintain the committed offset so that the Spark job runs from the next offset position and does its processing.
Here is the code on the Kafka side to fetch the offsets:
private static List<TopicPartition> getPartitions(KafkaConsumer consumer, String topic) {
    List<PartitionInfo> partitionInfoList = consumer.partitionsFor(topic);
    return partitionInfoList.stream().map(x -> new TopicPartition(topic, x.partition())).collect(Collectors.toList());
}

public static void getOffSet(KafkaConsumer consumer) {
    List<TopicPartition> topicPartitions = getPartitions(consumer, topic);
    consumer.assign(topicPartitions);
    consumer.seekToBeginning(topicPartitions);
    topicPartitions.forEach(x -> {
        System.out.println("Partition-> " + x + " startingOffSet-> " + consumer.position(x));
    });
    consumer.assign(topicPartitions);
    consumer.seekToEnd(topicPartitions);
    topicPartitions.forEach(x -> {
        System.out.println("Partition-> " + x + " endingOffSet-> " + consumer.position(x));
    });
    topicPartitions.forEach(x -> {
        consumer.poll(1000);
        OffsetAndMetadata offsetAndMetadata = consumer.committed(x);
        long position = consumer.position(x);
        System.out.printf("Committed: %s, current position %s%n",
                offsetAndMetadata == null ? null : offsetAndMetadata.offset(), position);
    });
}
The following Spark code for loading the messages from the topic is not working:
Dataset<Row> kafkaDataset = session.read().format("kafka")
        .option("kafka.bootstrap.servers", "localhost:9092")
        .option("subscribe", topic)
        .option("group.id", "test-consumer-group")
        .option("startingOffsets", "{\"Topic1\":{\"0\":2}}")
        .option("endingOffsets", "{\"Topic1\":{\"0\":3}}")
        .option("enable.auto.commit", "true")
        .load();
After the above code executes, I try to get the offsets again by calling getOffSet(consumer), but it always reads from offset 0, and the committed offset fetched initially keeps on increasing. I am new to Kafka and still figuring out how to handle such a scenario. Please help here.
Initially I had 10 records in my topic; I published another 2 records, and here is the output:
Output after the getOffSet method executes:
Partition-> Topic00-0 startingOffSet-> 0
Partition-> Topic00-0 endingOffSet-> 12
Committed: 12, current position 12
Output after the Spark code executes for loading messages:
Partition-> Topic00-0 startingOffSet-> 0
Partition-> Topic00-0 endingOffSet-> 12
Committed: 12, current position 12
I see no difference. Please take a look and suggest a resolution for this scenario.
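One possible approach (a sketch based on assumptions, not taken from this thread): the Spark batch Kafka source does not commit offsets to Kafka, so the group.id and enable.auto.commit options have no effect here. Instead, persist the last processed offset yourself after each run and pass it back in as startingOffsets on the next run. loadSavedOffsetsJson and saveOffsetsJson below are hypothetical helpers backed by whatever store you prefer (a file, a database table, etc.), and a single-partition topic is assumed:
// Sketch: resume the batch read from offsets persisted by the previous run.
// loadSavedOffsetsJson and saveOffsetsJson are hypothetical helpers you implement yourself.
String startingOffsets = loadSavedOffsetsJson(); // e.g. "{\"Topic1\":{\"0\":2}}", or "earliest" on the very first run

Dataset<Row> kafkaDataset = session.read().format("kafka")
        .option("kafka.bootstrap.servers", "localhost:9092")
        .option("subscribe", topic)
        .option("startingOffsets", startingOffsets)
        .load();

// ... process kafkaDataset and write to the target topic ...

// Persist where the next run should start: the highest consumed offset + 1.
Row maxRow = kafkaDataset.agg(org.apache.spark.sql.functions.max("offset")).first();
long nextOffset = maxRow.isNullAt(0) ? 0L : maxRow.getLong(0) + 1;
saveOffsetsJson("{\"" + topic + "\":{\"0\":" + nextOffset + "}}"); // single partition assumed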
Using Java's concurrent executor, the future's cancel method is not stopping the current task.
I have followed this solution for timing out and stopping processing of the current task, but it doesn't stop the processing.
I am trying this with a cron job: every 30 seconds my cron job gets executed, and I apply a 10-second timeout. The debugger reaches the future.cancel call, but the current task is not stopped.
Thank you.
@Scheduled(cron = "*/30 * * * * *")
public boolean cronTest()
{
    System.out.println("Inside cron - start ");
    DateFormat dateFormat = new SimpleDateFormat("yyyy/MM/dd HH:mm:ss");
    Date date = new Date();
    System.out.println(dateFormat.format(date));
    System.out.println("Inside cron - end ");
    ExecutorService executor = Executors.newCachedThreadPool();
    Callable<Object> task = new Callable<Object>() {
        public Object call() {
            int i = 1;
            while (i < 100)
            {
                System.out.println("i: " + i++);
                try {
                    TimeUnit.SECONDS.sleep(1);
                }
                catch (Exception e)
                {
                }
            }
            return null;
        }
    };
    Future<Object> future = executor.submit(task);
    try {
        Object result = future.get(10, TimeUnit.SECONDS);
    } catch (Exception e) {
        // timed out (or was interrupted) waiting for the task
    } finally {
        future.cancel(true);
        return true;
    }
}
The expected result is that the cron job runs every 30 seconds, times out after 10 seconds, and then waits roughly 20 seconds for the next cron run to start. It should not continue the old loop, because of the 10-second timeout.
Current result is:
Inside cron - start
2019/07/25 11:09:00
Inside cron - end
i: 1
i: 2
i: 3
i: 4 ... up to i: 31
Inside cron - start
2019/07/25 11:09:30
Inside cron - end
i: 1
i: 32
i: 2
i: 3
i: 33
...
Expected result is:
Inside cron - start
2019/07/25 11:09:00
Inside cron - end
i: 1
i: 2
i: 3
i: 4 ... up to i: 10
Inside cron - start
2019/07/25 11:09:30
Inside cron - end
i: 1
i: 2
i: 3 ... up to i: 10
The first problem is in this part of the code:
catch(Exception e)
{
}
When you invoke future.cancel(true), your thread is interrupted with Thread.interrupt().
This means that when the thread is sleeping, it gets woken up and throws InterruptedException, which is caught by the catch block and ignored. To fix this problem you have to handle this exception:
catch (InterruptedException e) {
    break; // breaking from the loop
}
catch (Exception e)
{
}
The second problem: Thread.interrupt() may be invoked while the thread is not sleeping. In this case InterruptedException is not thrown; instead, the thread's interrupted flag is raised. What you have to do is check this flag from time to time and, if it is raised, handle the interruption. The basic code for that looks like this:
try {
    if (Thread.currentThread().isInterrupted()) {
        break;
    }
    TimeUnit.SECONDS.sleep(1);
}
...
// rest of the code
UPDATE:
Here's the full code of Callable:
Callable<Object> task = new Callable<Object>() {
    public Object call() {
        int i = 1;
        while (i < 100)
        {
            System.out.println("i: " + i++);
            try {
                if (Thread.currentThread().isInterrupted()) {
                    break; // breaking from the while loop
                }
                TimeUnit.SECONDS.sleep(1);
            } catch (InterruptedException e) {
                break; // breaking from the while loop
            } catch (Exception e)
            {
            }
        }
        return null;
    }
};
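With both fixes in place, future.cancel(true) takes effect within about a second: either the sleep is interrupted and the InterruptedException handler breaks out of the loop, or the flag check at the top of the next iteration does.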
My app gets traffic updates from an API (this works) and returns a JSON array, each element of which I then take in a loop (as a JSONObject), attempting to update a TextView with each result every 5 seconds.
However, my code waits 15 seconds and then updates straight to the last value. I've done some research and it says to use AsyncTask, which I have done, but it has not made a difference.
I've added System.out.println(thestring_to_update_to), and this prints exactly as I would like my app to behave (changing every 5 seconds).
The following is in a try/catch block:
JSONArray TrafficInformation = new JSONArray(response);
int TrafficEvents = TrafficInformation.length();
int TrafficEvent = 0;
JSONObject CurrentEvent = new JSONObject();
do {
    CurrentEvent = new JSONObject(TrafficInformation.getString(TrafficEvent));
    TextView affected_route = (TextView) findViewById(R.id.disrupted_route);
    try {
        Object[] passTo = new Object[1];
        passTo[0] = CurrentEvent.getString("9");
        System.out.println(passTo[0]);
        new tasker().doInBackground(passTo);
        TrafficEvent++;
        Thread.sleep(5000);
    } catch (Exception e) {
        Toast.makeText(LiftShare.this, "There was an error with getting traffic info.", Toast.LENGTH_LONG).show();
    }
} while (TrafficEvent < TrafficEvents);
I also have this public class:
public class tasker extends AsyncTask {
    @Override
    protected Object[] doInBackground(Object[] Objects) {
        TextView affected_route = (TextView) findViewById(R.id.disrupted_route);
        affected_route.setText(Objects[0].toString());
        return null;
    }
}
This is the JSONArray that goes into the code (it is formatted correctly):
Array
(
[0] => {"1":"Congestion","2":"Minor Disruption - up to 15 minutes delay","3":"Location : The M3 eastbound exit slip at junction J9 . \nReason : Congestion. \nStatus : Currently Active. \nReturn To Normal : Normal traffic conditions are expected between 11:30 and 11:45 on 25 January 2018. \nDelay : There are currently delays of 10 minutes against expected traffic. \n","7":"M3 J9 eastbound exit | Eastbound | Congestion","9":"M3","10":"South East","11":"Hampshire","14":"2018-01-25T11:22:38+00:00"}
[1] => {"1":"Overturned Vehicle","2":"Severe Disruption - in excess of 3 hours delay or road closure","3":"Location : The M3 westbound between junctions J8 and J9 . \nReason : Clearing the scene of an overturned vehicle. \nStatus : Currently Active. \nTime To Clear : The event is expected to clear between 14:45 and 15:00 on 25 January 2018. \nReturn To Normal : Normal traffic conditions are expected between 14:45 and 15:00 on 25 January 2018. \nLanes Closed : All lanes are closed. \nPrevious Reason : Following an earlier accident. \n","7":"M3 westbound between J8 and J9 | Westbound | Overturned Vehicle","9":"M3","10":"South East","11":"Hampshire","14":"2018-01-25T06:51:12+00:00"}
[2] => {"1":"Congestion","2":"Moderate Disruption - between 15 minutes and 3 hours delay","3":"Location : The A34 southbound between the A272 and the junction with the M3 . \nReason : Congestion. \nStatus : Currently Active. \nReturn To Normal : Normal traffic conditions are expected between 12:45 and 13:00 on 25 January 2018. \nDelay : There are currently delays of 40 minutes against expected traffic. \n","7":"A34 southbound within the A272 junction | Southbound | Congestion","9":"A34","10":"South East","11":"Hampshire","14":"2018-01-25T07:48:23+00:00"}
)
How can I get the TextView to update to the new value every 5 seconds?
You have to use
new tasker().execute(passTo);
to start the AsyncTask on a background thread; otherwise, with the current implementation, it just acts as a normal method call.
Note: you cannot update the UI from the background thread, i.e. inside doInBackground; instead, override onPostExecute, which runs on the UI thread:
@Override
protected Object[] doInBackground(Object[] Objects) {
    TextView affected_route = (TextView) findViewById(R.id.disrupted_route);
    //affected_route.setText(Objects[0].toString()); // crashes; do this in onPostExecute instead
    return null;
}
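For illustration, here is a minimal sketch of what that could look like (the typed parameters and the inner-class placement are assumptions, not from the original post):
// Assumed to be an inner class of the Activity, so findViewById is available.
private class Tasker extends AsyncTask<String, Void, String> {
    @Override
    protected String doInBackground(String... params) {
        // background work only; never touch views here
        return params[0];
    }

    @Override
    protected void onPostExecute(String result) {
        // runs on the UI thread, so updating the view is safe
        TextView affectedRoute = (TextView) findViewById(R.id.disrupted_route);
        affectedRoute.setText(result);
    }
}
You would then start it with new Tasker().execute(CurrentEvent.getString("9"));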
Update: you can use postDelayed with a delay to update the UI after some interval:
int i = 0;
affected_route.postDelayed(new Runnable() {
    public void run() {
        affected_route.setText(yourText);
    }
}, i += 5000);
AsyncTask seems like overkill for your requirement, as you are not really doing any work in the background. You could schedule the text to be updated after a time period using a Handler (from android.os) like this:
Handler handler = new Handler(Looper.getMainLooper());

Runnable textUpdater = new Runnable() {
    @Override
    public void run() {
        // this needs to execute in the UI thread
        affected_route.setText(lastUpdate);
    }
};

String lastUpdate = "Store your last update here";

void updateText() {
    handler.postDelayed(textUpdater, 5000);
}
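If the text should keep refreshing every 5 seconds, one traffic event after another, the Runnable can simply re-post itself (a sketch under that assumption):
Runnable textUpdater = new Runnable() {
    @Override
    public void run() {
        affected_route.setText(lastUpdate);
        handler.postDelayed(this, 5000); // re-schedule itself for the next update
    }
};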
We've performed a performance test with Oracle Advanced Queuing on our Oracle DB environment. We created the queue and the queue table with the following script:
BEGIN
  DBMS_AQADM.create_queue_table(
    queue_table        => 'verisoft.qt_test',
    queue_payload_type => 'SYS.AQ$_JMS_MESSAGE',
    sort_list          => 'ENQ_TIME',
    multiple_consumers => false,
    message_grouping   => 0,
    comment            => 'POC Authorizations Queue Table - KK',
    compatible         => '10.0',
    secure             => true);

  DBMS_AQADM.create_queue(
    queue_name     => 'verisoft.q_test',
    queue_table    => 'verisoft.qt_test',
    queue_type     => dbms_aqadm.NORMAL_QUEUE,
    max_retries    => 10,
    retry_delay    => 0,
    retention_time => 0,
    comment        => 'POC Authorizations Queue - KK');

  DBMS_AQADM.start_queue('q_test');
END;
/
We published 1,000,000 messages at 2380 TPS using a PL/SQL client, and we consumed those 1,000,000 messages at 292 TPS using the Oracle JMS API client.
The consumer rate is almost 10 times slower than the publisher's, and that speed does not meet our requirements.
Below is the piece of Java code that we use to consume messages:
if (q == null) initializeQueue();
System.out.println(listenerID + ": Listening on queue " + q.getQueueName() + "...");
MessageConsumer consumer = sess.createConsumer(q);
for (Message m; (m = consumer.receive()) != null;) {
    new Timer().schedule(new QueueExample(m), 0);
}
sess.close();
con.close();
Do you have any suggestions on how we can improve performance on the consumer side?
Your use of Timer may be your primary issue. The Timer documentation reads:
Corresponding to each Timer object is a single background thread that is used to execute all of the timer's tasks, sequentially. Timer tasks should complete quickly. If a timer task takes excessive time to complete, it "hogs" the timer's task execution thread. This can, in turn, delay the execution of subsequent tasks, which may "bunch up" and execute in rapid succession when (and if) the offending task finally completes.
I would suggest you use a ThreadPool.
// My executor.
ExecutorService executor = Executors.newCachedThreadPool();

public void test() throws InterruptedException {
    for (int i = 0; i < 1000; i++) {
        final int n = i;
        // Instead of using Timer, create a Runnable and pass it to the Executor.
        executor.submit(new Runnable() {
            @Override
            public void run() {
                System.out.println("Run " + n);
            }
        });
    }
    executor.shutdown();
    executor.awaitTermination(1, TimeUnit.DAYS);
}
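Applied to your consumer loop, that could look like the sketch below. Since QueueExample is passed to Timer.schedule, it presumably extends TimerTask, which implements Runnable, so instances can be submitted to an executor directly; the pool size is an assumption to tune for your workload:
ExecutorService executor = Executors.newFixedThreadPool(16); // assumed size; tune it

for (Message m; (m = consumer.receive()) != null;) {
    executor.submit(new QueueExample(m)); // process each message on the pool instead of a fresh Timer
}

executor.shutdown();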
I am getting Timeout exceptions even though there is not much load on the Couchbase server.
net.spy.memcached.OperationTimeoutException: Timeout waiting for value
at net.spy.memcached.MemcachedClient.get(MemcachedClient.java:1003)
at net.spy.memcached.MemcachedClient.get(MemcachedClient.java:1018)
at com.eos.cache.CacheClient.get(CacheClient.java:280)
at com.eos.cache.GenericCacheAccessObject.get(GenericCacheAccessObject.java:55)
...
...
Caused by: net.spy.memcached.internal.CheckedOperationTimeoutException: Timed out waiting for operation - failing node: /192.168.4.12:11210
at net.spy.memcached.internal.OperationFuture.get(OperationFuture.java:157)
at net.spy.memcached.internal.GetFuture.get(GetFuture.java:62)
at net.spy.memcached.MemcachedClient.get(MemcachedClient.java:997)
...30 more
This is how I am creating the client.
List<URI> uris = new ArrayList<URI>();
String[] serverTokens = getServers().split(" ");
for (int index = 0; index < serverTokens.length; index++) {
    uris.add(new URI(serverTokens[index]));
}
CouchbaseConnectionFactoryBuilder ccfb = new CouchbaseConnectionFactoryBuilder();
ccfb.setProtocol(Protocol.BINARY);
ccfb.setOpTimeout(10000);          // wait up to 10 seconds for an operation to succeed
ccfb.setOpQueueMaxBlockTime(5000); // wait up to 5 seconds when trying to enqueue an operation
ccfb.setMaxReconnectDelay(1500);
CouchbaseConnectionFactory cf = ccfb.buildCouchbaseConnection(uris, bucket, "");
CouchbaseClient client = new CouchbaseClient(cf);
I am maintaining a pool of persistent clients in our web server, and we are not even close to the maximum connection limit, which is set to only 15.
Please help me solve this.