My Java application does a lot of synchronization with databases. Each such synchronization event is logged, e.g.
logger.info("Starting synchronization...");
synchronizeWithDatabase();
logger.info("Synchronization has ended.");
This clutters the logs quite a lot. Would it be possible to log a summary every hour (e.g. "there were 60 successful synchronization events from 12:00:00 to 13:00:00") and only log errors per occurrence? I'm using the slf4j Logger.
The straightforward way is to write code to do what you want. Write a wrapper class that, for each "event", tracks the last time you actually logged the event, and the number of times it has occurred.
Then on a call for which the time-since-logged exceeds an hour, actually write the log message, and reset the counter.
Here's a sketch off the top of my head. Hasn't been near a compiler. Not intended as a fully-baked solution.
class QuenchData {
    private static final long delta = 1000 * 60 * 60; // 1 hr
    private long lastLogTime;
    private int count;

    void log(String message, Logger logger) {
        long now = System.currentTimeMillis();
        if (now < lastLogTime || now - lastLogTime >= delta) {
            String andAlso = String.format(" [occurred %d times]", count);
            logger.info(message + andAlso);
            lastLogTime = now;
            count = 0;
        } else {
            count++;
        }
    }
}
class QuenchedLogger {
    private Logger logger = LoggerFactory.getLogger(...whatever...);
    private Map<String, QuenchData> history = new HashMap<>();
    // ...

    synchronized void log(String message) {
        QuenchData qd = history.get(message);
        if (qd == null)
            history.put(message, (qd = new QuenchData()));
        qd.log(message, logger);
    }
}
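Usage would then look something like this (a minimal sketch reusing the class above; identical messages are summarized roughly once per hour, while errors can still go through the normal logger per occurrence):

QuenchedLogger quenchedLogger = new QuenchedLogger();

quenchedLogger.log("Starting synchronization...");
synchronizeWithDatabase();
quenchedLogger.log("Synchronization has ended.");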
I am writing a Java program using the Binance Java API to retrieve the 1-minute interval candlesticks of a trading pair. Using this Java class, I want to calculate the EMA (Exponential Moving Average) of the past 10 days.
The Binance Java API websocket implementation delivers the latest depth events, which also contain the current closing price that I use to update the EMA calculation by calling the EMA#update method.
However, I notice that the EMA shown on Binance's graph does not correspond to the values I get from my code. I also notice that my values need some time to 'settle' before giving (somewhat) the same values as those shown on Binance.
On TradingView I found a formula to calculate the EMA (which shows the same EMA value as on Binance), but it is different from the one used in the EMA class. However, even when using that formula, the values are very different from the ones on Binance.
Could someone help me figure out what the issue is and how to obtain the same values?
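For reference, the recurrence my EMA class below applies (after seeding with a simple moving average of the first period closes) is:

multiplier = 2 / (period + 1)
EMA = (close - previousEMA) * multiplier + previousEMA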
UPDATE 1: code provided
import java.util.*;
import java.util.stream.Collectors;
import com.binance.api.client.BinanceApiClientFactory;
import com.binance.api.client.BinanceApiRestClient;
import com.binance.api.client.BinanceApiWebSocketClient;
import com.binance.api.client.domain.market.Candlestick;
import com.binance.api.client.domain.market.CandlestickInterval;
import core.util.text.DecimalFormat;
import core.util.text.StringUtil;
public class test_003
{
private Map<Long, Candlestick> candlesticksCache = new TreeMap<>();
private EMA EMA_10;
private EMA EMA_20;
public static void main(String[] pArgs)
{
new test_003();
}
private test_003()
{
Locale.setDefault(Locale.US);
candlesticksCacheExample("ADAUSDT", CandlestickInterval.ONE_MINUTE);
}
private void candlesticksCacheExample(String symbol, CandlestickInterval interval)
{
initializeCandlestickCache(symbol, interval);
startCandlestickEventStreaming(symbol, interval);
}
private void initializeCandlestickCache(String symbol, CandlestickInterval interval)
{
BinanceApiClientFactory factory = BinanceApiClientFactory.newInstance();
BinanceApiRestClient client = factory.newRestClient();
List<Candlestick> candlestickBars_10 = client.getCandlestickBars(symbol.toUpperCase(), interval, Integer.valueOf(11), null, null);
List<Candlestick> candlestickBars_20 = client.getCandlestickBars(symbol.toUpperCase(), interval, Integer.valueOf(21), null, null);
List<Double> closingPriceList_10 = candlestickBars_10.stream().map(c -> Double.valueOf(c.getClose())).collect(Collectors.toList());
List<Double> closingPriceList_20 = candlestickBars_20.stream().map(c -> Double.valueOf(c.getClose())).collect(Collectors.toList());
EMA_10 = new EMA(closingPriceList_10, Integer.valueOf(10));
EMA_20 = new EMA(closingPriceList_20, Integer.valueOf(20));
}
private void startCandlestickEventStreaming(String symbol, CandlestickInterval interval)
{
BinanceApiClientFactory factory = BinanceApiClientFactory.newInstance();
BinanceApiWebSocketClient client = factory.newWebSocketClient();
client.onCandlestickEvent(symbol.toLowerCase(), interval, response -> {
Long openTime = response.getOpenTime();
Candlestick updateCandlestick = candlesticksCache.get(openTime);
if (updateCandlestick == null)
{
// new candlestick
updateCandlestick = new Candlestick();
}
// update candlestick with the stream data
updateCandlestick.setOpenTime(response.getOpenTime());
updateCandlestick.setOpen(response.getOpen());
updateCandlestick.setLow(response.getLow());
updateCandlestick.setHigh(response.getHigh());
updateCandlestick.setClose(response.getClose());
updateCandlestick.setCloseTime(response.getCloseTime());
updateCandlestick.setVolume(response.getVolume());
updateCandlestick.setNumberOfTrades(response.getNumberOfTrades());
updateCandlestick.setQuoteAssetVolume(response.getQuoteAssetVolume());
updateCandlestick.setTakerBuyQuoteAssetVolume(response.getTakerBuyQuoteAssetVolume());
updateCandlestick.setTakerBuyBaseAssetVolume(response.getTakerBuyBaseAssetVolume());
// Store the updated candlestick in the cache
candlesticksCache.put(openTime, updateCandlestick);
double closingPrice = Double.valueOf(updateCandlestick.getClose());
EMA_10.update(closingPrice);
EMA_20.update(closingPrice);
System.out.println(StringUtil.replacePlaceholders("Closing price: %1 | EMA(10): %2 - EMA(20): %3", response.getClose(),
DecimalFormat.format(EMA_10.get(), "#.#####"),
DecimalFormat.format(EMA_20.get(), "#.#####")));
});
}
public class EMA
{
private double currentEMA;
private final int period;
private final double multiplier;
private final List<Double> EMAhistory;
private final boolean historyNeeded;
private String fileName;
public EMA(List<Double> closingPrices, int period)
{
this(closingPrices, period, false);
}
public EMA(List<Double> closingPrices, int period, boolean historyNeeded)
{
currentEMA = 0;
this.period = period;
this.historyNeeded = historyNeeded;
this.multiplier = 2.0 / (double) (period + 1);
this.EMAhistory = new ArrayList<>();
init(closingPrices);
}
public double get()
{
return currentEMA;
}
public double getTemp(double newPrice)
{
return (newPrice - currentEMA) * multiplier + currentEMA;
}
public void init(List<Double> closingPrices)
{
if (period > closingPrices.size()) return;
//Initial SMA
for (int i = 0; i < period; i++)
{
currentEMA += closingPrices.get(i);
}
currentEMA = currentEMA / (double) period;
if (historyNeeded) EMAhistory.add(currentEMA);
//Dont use latest unclosed candle;
for (int i = period; i < closingPrices.size() - 1; i++)
{
update(closingPrices.get(i));
}
}
public void update(double newPrice)
{
// EMA = (Close - EMA(previousBar)) * multiplier + EMA(previousBar)
currentEMA = (newPrice - currentEMA) * multiplier + currentEMA;
if (historyNeeded) EMAhistory.add(currentEMA);
}
public int check(double newPrice)
{
return 0;
}
public String getExplanation()
{
return null;
}
public List<Double> getEMAhistory()
{
return EMAhistory;
}
public int getPeriod()
{
return period;
}
}
}
UPDATE 2
The problem is that onCandlestickEvent is not just called when a candle is completed, but actually multiple times per minute (every 2 seconds or so). The data that you are receiving in the response spans the time from when the candle is opened until the event time of the response, whether the candle is completed or not.
To see what I mean, you could replace the System.out.println() statement in your startCandlestickEventStreaming method with the following:
System.out.println(response.getOpenTime() + ";" +
response.getEventTime() + ";" +
response.getCloseTime());
You will see that the close time of the candle actually lies in the future.
In order to update your EMA correctly, you will have to wait until the candle has actually been completed. You could store the open time of the tentative candle in a member variable, check if it has changed since the last time onCandlestickEvent was called, and then update your EMA with the final close value of the candle:
client.onCandlestickEvent(symbol.toLowerCase(), interval, response -> {
Long openTime = response.getOpenTime();
Candlestick updateCandlestick = candlesticksCache.get(openTime);
if (updateCandlestick == null)
{
// new candlestick
updateCandlestick = new Candlestick();
}
// update candlestick with the stream data
...
// Store the updated candlestick in the cache
candlesticksCache.put(openTime, updateCandlestick);
if (openTime > m_LastOpenTime)
{
// need to get the close of the PREVIOUS candle
Candlestick previousCandle = candlesticksCache.get(m_LastOpenTime);
double closingPrice = Double.valueOf(previousCandle.getClose());
EMA_10.update(closingPrice);
EMA_20.update(closingPrice);
System.out.println(StringUtil.replacePlaceholders("Closing price: %1 | EMA(10): %2 - EMA(20): %3", response.getClose(),
DecimalFormat.format(EMA_10.get(), "#.#####"),
DecimalFormat.format(EMA_20.get(), "#.#####")));
m_LastOpenTime = openTime;
}
});
You'll probably get an exception on the first response, because there are no candles in the cache yet and we don't have an m_LastOpenTime. You could get the current server time before calling client.onCandlestickEvent():
private void startCandlestickEventStreaming(String symbol, CandlestickInterval interval)
{
BinanceApiClientFactory factory = BinanceApiClientFactory.newInstance();
BinanceApiWebSocketClient client = factory.newWebSocketClient();
BinanceApiRestClient restClient = factory.newRestClient();
m_LastOpenTime = restClient.getServerTime();
client.onCandlestickEvent(symbol.toLowerCase(), interval, response -> {
...
}
}
I noticed there's actually a much simpler way than my other answer. I'm leaving that one up, however, because it could still be relevant for dealing with a dodgy connection where you can't necessarily rely on always getting the final candlestick with your response.
The response.getBarFinal() method allows testing whether the response you have received is the final candlestick or just an intermediate one. If you change your code as follows, your EMA will only get updated with the final close value of the candle, as it should be:
if (response.getBarFinal())
{
double closingPrice = Double.valueOf(updateCandlestick.getClose());
EMA_10.update(closingPrice);
EMA_20.update(closingPrice);
System.out.println(StringUtil.replacePlaceholders("Closing price: %1 | EMA(10): %2 - EMA(20): %3", response.getClose(),
DecimalFormat.format(EMA_10.get(), "#.#####"),
DecimalFormat.format(EMA_20.get(), "#.#####")));
}
I use the UsageStats feature of Android, but the smallest interval is INTERVAL_DAILY.
long time = System.currentTimeMillis();
List<UsageStats> appList = manager.queryUsageStats(UsageStatsManager.INTERVAL_DAILY, time - DAY_IN_MILLI_SECONDS, time);
How can I get UsageStats in an hourly interval?
All credit goes to this answer. I have learned from that one.
How can we collect app usage data for a customized time range (e.g. per hour)?
We have to call the queryEvents(long begin_time, long end_time) method, as it provides all data from begin_time to end_time. It gives us each app's data through foreground and background events, instead of the total time spent like the queryUsageStats() method does. So, using the foreground and background event timestamps, we can count the number of times an app has been launched and also find the usage duration for each app.
Implementation to Collect Last 1 Hour App Usage Data
First, add the following line to the AndroidManifest.xml file and also request the usage-access permission from the user.
<uses-permission android:name="android.permission.PACKAGE_USAGE_STATS" />
Add the following lines inside any method
long hour_in_mil = 1000*60*60; // In Milliseconds
long end_time = System.currentTimeMillis();
long start_time = end_time - hour_in_mil;
Then, call the method getUsageStatistics()
getUsageStatistics(start_time, end_time);
The getUsageStatistics method:
@RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
void getUsageStatistics(long start_time, long end_time) {
UsageEvents.Event currentEvent;
// List<UsageEvents.Event> allEvents = new ArrayList<>();
HashMap<String, AppUsageInfo> map = new HashMap<>();
HashMap<String, List<UsageEvents.Event>> sameEvents = new HashMap<>();
UsageStatsManager mUsageStatsManager = (UsageStatsManager)
context.getSystemService(Context.USAGE_STATS_SERVICE);
if (mUsageStatsManager != null) {
// Get all apps data from starting time to end time
UsageEvents usageEvents = mUsageStatsManager.queryEvents(start_time, end_time);
// Put these data into the map
while (usageEvents.hasNextEvent()) {
currentEvent = new UsageEvents.Event();
usageEvents.getNextEvent(currentEvent);
if (currentEvent.getEventType() == UsageEvents.Event.ACTIVITY_RESUMED ||
currentEvent.getEventType() == UsageEvents.Event.ACTIVITY_PAUSED) {
// allEvents.add(currentEvent);
String key = currentEvent.getPackageName();
if (map.get(key) == null) {
map.put(key, new AppUsageInfo(key));
sameEvents.put(key,new ArrayList<UsageEvents.Event>());
}
sameEvents.get(key).add(currentEvent);
}
}
// Traverse through each app data which is grouped together and count launch, calculate duration
for (Map.Entry<String,List<UsageEvents.Event>> entry : sameEvents.entrySet()) {
int totalEvents = entry.getValue().size();
if (totalEvents > 1) {
for (int i = 0; i < totalEvents - 1; i++) {
UsageEvents.Event E0 = entry.getValue().get(i);
UsageEvents.Event E1 = entry.getValue().get(i + 1);
if (E1.getEventType() == 1 || E0.getEventType() == 1) {
map.get(E1.getPackageName()).launchCount++;
}
if (E0.getEventType() == 1 && E1.getEventType() == 2) {
long diff = E1.getTimeStamp() - E0.getTimeStamp();
map.get(E0.getPackageName()).timeInForeground += diff;
}
}
}
// If the first event type is ACTIVITY_PAUSED (2), add the difference between the event time and start_time, because the application was already running.
if (entry.getValue().get(0).getEventType() == 2) {
long diff = entry.getValue().get(0).getTimeStamp() - start_time;
map.get(entry.getValue().get(0).getPackageName()).timeInForeground += diff;
}
// If the last event type is ACTIVITY_RESUMED (1), add the difference between end_time and the event time, because the application is still running.
if (entry.getValue().get(totalEvents - 1).getEventType() == 1) {
long diff = end_time - entry.getValue().get(totalEvents - 1).getTimeStamp();
map.get(entry.getValue().get(totalEvents - 1).getPackageName()).timeInForeground += diff;
}
}
smallInfoList = new ArrayList<>(map.values());
// Concatenating data to show in a text view. You may do according to your requirement
for (AppUsageInfo appUsageInfo : smallInfoList)
{
// Do according to your requirement
strMsg = strMsg.concat(appUsageInfo.packageName + " : " + appUsageInfo.launchCount + "\n\n");
}
TextView tvMsg = findViewById(R.id.MA_TvMsg);
tvMsg.setText(strMsg);
} else {
Toast.makeText(context, "Sorry...", Toast.LENGTH_SHORT).show();
}
}
AppUsageInfo.class
import android.graphics.drawable.Drawable;
class AppUsageInfo {
Drawable appIcon; // You may also collect this usage data, if you wish.
String appName, packageName;
long timeInForeground;
int launchCount;
AppUsageInfo(String pName) {
this.packageName=pName;
}
}
How can I customize this code to collect data per hour?
As you want per-hour data, change the end_time and start_time values for each hour's data. For instance, if I wanted to collect data for each of the past 2 hours, I would do the following.
long end_time = System.currentTimeMillis();
long start_time = end_time - (1000*60*60);
getUsageStatistics(start_time, end_time);
end_time = start_time;
start_time = start_time - hour_in_mil;
getUsageStatistics(start_time, end_time);
However, you may use a Handler instead of repeatedly rewriting start_time and end_time by hand. Each time one hour's data has been collected, the task completes, the variable values are shifted automatically, and getUsageStatistics is called again, as sketched below.
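A minimal sketch of that idea (the field names, the number of hours and the Handler wiring here are made up for illustration; getUsageStatistics is the method shown above):

private final Handler handler = new Handler(Looper.getMainLooper());
private static final long HOUR_IN_MIL = 1000L * 60 * 60;
private static final int HOURS_TO_COLLECT = 2; // e.g. the past 2 hours
private long end_time = System.currentTimeMillis();
private long start_time = end_time - HOUR_IN_MIL;
private int hoursCollected = 0;

private final Runnable collectTask = new Runnable() {
    @Override
    public void run() {
        getUsageStatistics(start_time, end_time);
        // shift the window back by one hour for the next run
        end_time = start_time;
        start_time = start_time - HOUR_IN_MIL;
        if (++hoursCollected < HOURS_TO_COLLECT) {
            handler.post(this);
        }
    }
};

// start collecting, e.g. from onCreate():
// handler.post(collectTask);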
Note: You will probably not be able to retrieve data older than about the past 7.5 days, as events are only kept by the system for a few days.
Calendar cal = (Calendar) Calendar.getInstance().clone();
// I used this and it worked, but only for up to 7.5 days ago
if (daysAgo == 0) {
//Today - I only count from 00h00m00s today to present
end = cal.getTimeInMillis();
start = LocalDate.now().toDateTimeAtStartOfDay().toInstant().getMillis();
} else {
long todayStartOfDayTimeStamp = LocalDate.now().toDateTimeAtStartOfDay().toInstant().getMillis();
if (daysAgo == -6) {
//6 days ago, only get events in time -7 days to -7.5 days
cal.setTimeInMillis(System.currentTimeMillis());
cal.add(Calendar.DATE, daysAgo + 1);
end = cal.getTimeInMillis();
start = end - 43200000;
} else {
//get events from 00h00m00s to 23h59m59s
//Current calendar point to 0h0m today
cal.setTimeInMillis(todayStartOfDayTimeStamp);
cal.add(Calendar.DATE, daysAgo + 1);
end = cal.getTimeInMillis();
cal.add(Calendar.DATE, -1);
start = cal.getTimeInMillis();
}
}
I don't think it's possible. Even if you ask for data in the middle of an interval, it looks like the data is stored in buckets, and the minimum bucket is a day.
In UsageStatsManager documentation, it says:
A request for data in the middle of a time interval will include that interval.
Also, INTERVAL_BEST is not a real interval; it just selects one of the available intervals for the given time range. In the UsageStatsManager.java source code, it says:
/**
 * The number of available intervals. Does not include {@link #INTERVAL_BEST}, since it
 * is a pseudo interval (it actually selects a real interval).
 * {@hide}
 */
public static final int INTERVAL_COUNT = 4;
Yes, Android provides INTERVAL_DAILY as the minimum. But for the best result, you can use INTERVAL_BEST: Android picks the best available interval for the given time range in queryUsageStats(int, long, long).
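For example (a sketch reusing the manager, time and DAY_IN_MILLI_SECONDS variables from the question):

List<UsageStats> appList = manager.queryUsageStats(UsageStatsManager.INTERVAL_BEST, time - DAY_IN_MILLI_SECONDS, time);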
Happy coding...
I'm ingesting a stream of data into Flink. For each 'instance' of this data, I have a timestamp. I can detect whether the machine I'm getting the data from is 'producing' or 'not producing'; this is done via a custom flat map function located in its own static class.
I want to calculate how long the machine has been producing / not producing.
My current approach is to collect the production and non-production timestamps in two plain lists. For each 'instance' of the data, I calculate the current production/non-production duration by subtracting the earliest timestamp from the latest timestamp. This is giving me incorrect results, though. When the production state changes from producing to non-producing, I clear the timestamp list for producing, and vice versa, so that if production starts again, the duration starts from zero.
I've looked into the two lists I collect the respective timestamps in and I see things I don't understand. My assumption is that, as long as the machine 'produces', the first timestamp in the production timestamp list stays the same, while new timestamps are added to the list per new instance of data.
Apparently, this assumption is wrong, since I get seemingly random timestamps in the lists. They are still correctly ordered, though.
Here's my code for the flatmap function:
public static class ImaginePaperDataConverterRich extends RichFlatMapFunction<ImaginePaperData, String> {
private static final long serialVersionUID = 4736981447434827392L;
private transient ValueState<ProductionState> stateOfProduction;
SimpleDateFormat dateFormat = new SimpleDateFormat("dd.MM.yyyy HH:mm:ss.SS");
DateFormat timeDiffFormat = new SimpleDateFormat("dd HH:mm:ss.SS");
String timeDiffString = "00 00:00:00.000";
List<String> productionTimestamps = new ArrayList<>();
List<String> nonProductionTimestamps = new ArrayList<>();
public String calcProductionTime(List<String> timestamps) {
if (!timestamps.isEmpty()) {
try {
Date firstDate = dateFormat.parse(timestamps.get(0));
Date lastDate = dateFormat.parse(timestamps.get(timestamps.size()-1));
long timeDiff = lastDate.getTime() - firstDate.getTime();
if (timeDiff < 0) {
System.out.println("Something weird happened. Maybe EOF.");
return timeDiffString;
}
timeDiffString = String.format("%02d %02d:%02d:%02d.%02d",
TimeUnit.MILLISECONDS.toDays(timeDiff),
TimeUnit.MILLISECONDS.toHours(timeDiff) % TimeUnit.DAYS.toHours(1),
TimeUnit.MILLISECONDS.toMinutes(timeDiff) % TimeUnit.HOURS.toMinutes(1),
TimeUnit.MILLISECONDS.toSeconds(timeDiff) % TimeUnit.MINUTES.toSeconds(1),
TimeUnit.MILLISECONDS.toMillis(timeDiff) % TimeUnit.SECONDS.toMillis(1));
} catch (ParseException e) {
e.printStackTrace();
}
System.out.println("State duration: " + timeDiffString);
}
return timeDiffString;
}
@Override
public void open(Configuration config) {
ValueStateDescriptor<ProductionState> descriptor = new ValueStateDescriptor<>(
"stateOfProduction",
TypeInformation.of(new TypeHint<ProductionState>() {}),
ProductionState.NOT_PRODUCING);
stateOfProduction = getRuntimeContext().getState(descriptor);
}
@Override
public void flatMap(ImaginePaperData ImaginePaperData, Collector<String> output) throws Exception {
List<String> warnings = new ArrayList<>();
JSONObject jObject = new JSONObject();
String productionTime = "0";
String nonProductionTime = "0";
// Data analysis
if (stateOfProduction == null || stateOfProduction.value() == ProductionState.NOT_PRODUCING && ImaginePaperData.actSpeedCl > 60.0) {
stateOfProduction.update(ProductionState.PRODUCING);
} else if (stateOfProduction.value() == ProductionState.PRODUCING && ImaginePaperData.actSpeedCl < 60.0) {
stateOfProduction.update(ProductionState.NOT_PRODUCING);
}
if(stateOfProduction.value() == ProductionState.PRODUCING) {
if (!nonProductionTimestamps.isEmpty()) {
System.out.println("Production has started again, non production timestamps cleared");
nonProductionTimestamps.clear();
}
productionTimestamps.add(ImaginePaperData.timestamp);
System.out.println(productionTimestamps);
productionTime = calcProductionTime(productionTimestamps);
} else {
if(!productionTimestamps.isEmpty()) {
System.out.println("Production has stopped, production timestamps cleared");
productionTimestamps.clear();
}
nonProductionTimestamps.add(ImaginePaperData.timestamp);
warnings.add("Production has stopped.");
System.out.println(nonProductionTimestamps);
//System.out.println("Production stopped");
nonProductionTime = calcProductionTime(nonProductionTimestamps);
}
// The rest is just JSON stuff
Do I maybe have to hold these two timestamp lists in a ListState?
EDIT: Because another user asked, here is the data I'm getting.
{'szenario': 'machine01', 'timestamp': '31.10.2018 09:18:39.432069', 'data': {1: 100.0, 2: 100.0, 101: 94.0, 102: 120.0, 103: 65.0}}
The behaviour I expect is that my Flink program collects the timestamps in the two lists productionTimestamps and nonProductionTimestamps. Then I want my calcProductionTime method to subtract the first timestamp in the list from the last one, to get the duration between when I first detected the machine was "producing" / "not producing" and the time it stopped "producing" / "not producing".
I found out that the reason for the 'seemingly random' timestamps is Apache Flink's parallel execution. When the parallelism is set to > 1, the order of events isn't guaranteed anymore.
My quick fix was to set the parallelism of my program to 1; as far as I know, this guarantees the order of events.
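In code, that is simply (a minimal sketch; the commented-out operator-level variant is an alternative if ordering only matters for this one flat map):

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(1); // job-wide: events keep their order through the pipeline
// or, only for the operator in question:
// stream.flatMap(new ImaginePaperDataConverterRich()).setParallelism(1);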
The Drools version is 6.2.0, and I'm using Stream Mode.
I used @timestamp to tell the engine to use the timestamp from the event's attribute.
The problem is that the number of facts in working memory keeps growing, and facts are not retracted even after they have expired (10s).
I tried to use the pseudo clock, but that also had no effect.
This is my DRL:
package test.drools
import test.drools.LogEntry;
declare LogEntry
    @role(event)
    @timestamp(callDateTime)
end
rule "sliding-window-test"
when
$msgA: LogEntry($sip: sourceIP)
Number(intValue > 2) from accumulate (
$msgB: LogEntry(sourceIP == $sip, this after $msgA) over window:time(10s); count($msgB))
then
System.out.println("rule sliding-window-test action actived!!");
retract($msgA)
end
This is my code:
public class LogEntry {
private String logcontent = null;
private String[] logFieldStrArray = null;
private String sourceIP = null;
private long callDateTime;
public LogEntry(String content) {
this.logcontent = content;
if (logFieldStrArray == null) {
logFieldStrArray = logcontent.split("\\,");
}
sourceIP = logFieldStrArray[6];
callDateTime = System.nanoTime();
}
public long getcallDateTime() {
return callDateTime;
}
public String getsourceIP() {
return sourceIP;
}
}
The session configuration is correct; here I just show how the clock's advanceTime is called.
Using the pseudo clock and advanceTime:
public class DroolsSession {
private long beginTime = 0, curTime = 0;
private KieSession statfulKsession;
private Object syncobject;
public void InsertAndFireAll(Object obj) {
synchronized(syncobject) {
if (beginTime == 0) {
beginTime = ((LogEntry)obj).getcallDateTime();
} else {
curTime = ((LogEntry)obj).getcallDateTime();
SessionPseudoClock clock = statfulKsession.getSessionClock();
clock.advanceTime(curTime - beginTime, TimeUnit.NANOSECONDS);
beginTime = curTime;
}
statfulKsession.insert(obj);
statfulKsession.fireAllRules();
}
}
}
By the way, I use System.nanoTime(); does Drools support nanosecond timestamps?
I'm looking forward to your answer.
What the condition of rule "sliding-window-test" says is:
If there is a LogEntry event A (no matter what, no matter how old) and
if there are more than two LogEntry events later than A, with the same sourceIP, that have arrived in the last 10 seconds: then retract A.
This does not permit LogEntry events to be retracted automatically, because it is
always possible for the second condition to be fulfilled at some later time.
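If the intent is simply to discard a LogEntry 10 seconds after its timestamp, the usual way is an explicit expiration offset on the type declaration (a sketch, not part of the original rule; with it, the engine knows the event can never match again after that offset and can remove it):

declare LogEntry
    @role(event)
    @timestamp(callDateTime)
    @expires(10s)
end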
I have a requirement to read user information from 2 different sources (databases) per userId and store the consolidated information in a Map keyed by userId. The number of users can vary based on the period they have opted for; groups of users may belong to different periods of the year, e.g. daily, weekly, or monthly users.
I used HashMap and LinkedHashMap to get this done. As this slows down the process, I thought of using threading to make it faster.
After reading some tutorials and examples, I am now using ConcurrentHashMap and ExecutorService.
In some cases, based on validation, I want to skip the current iteration and move on to the next user's info. The compiler does not allow the continue keyword inside the Callable within the for loop. Is there any way to achieve the same thing within multithreaded code?
Moreover, although the code below works, it is not significantly faster than the code without threading, which makes me doubt whether the ExecutorService is implemented correctly.
How do we debug if we get an error in multithreaded code? Execution stops at the breakpoint, but not consistently, and it does not move to the next line with F6.
Can someone point out if I am missing something in the code? Any other example of a similar use case would also be a great help.
public void getMap() throws UserException
{
long startTime = System.currentTimeMillis();
Map<String, Map<Integer, User>> map = new ConcurrentHashMap<String, Map<Integer, User>>();
//final String key = "";
try
{
final Date todayDate = new Date();
List<String> applyPeriod = db.getPeriods(todayDate);
for (String period : applyPeriod)
{
try
{
final String key = period;
List<UserTable1> eligibleUsers = db.findAllUsers(key);
Map<Integer, User> userIdMap = new ConcurrentHashMap<Integer, User>();
ExecutorService executor = Executors.newFixedThreadPool(eligibleUsers.size());
CompletionService<User> cs = new ExecutorCompletionService<User>(executor);
int userCount=0;
for (UserTable1 eligibleUser : eligibleUsers)
{
try
{
cs.submit(
new Callable<User>()
{
public User call()
{
int userId = eligibleUser.getUserId();
List<EmployeeTable2> empData = db.findByUserId(userId);
EmployeeTable2 emp = null;
if (null != empData && !empData.isEmpty())
{
emp = empData.get(0);
}else{
String errorMsg = "No record found for given User ID in emp table";
logger.error(errorMsg);
//continue;
// continue does not work here.
}
User user = new User();
user.setUserId(userId);
user.setFullName(emp.getFullName());
return user;
}
}
);
userCount++;
}
catch(Exception ex)
{
String errorMsg = "Error while creating map :" + ex.getMessage();
logger.error(errorMsg);
}
}
for (int i = 0; i < userCount ; i++ ) {
try {
User user = cs.take().get();
if (user != null) {
userIdMap.put(user.getUserId(), user);
}
} catch (ExecutionException e) {
} catch (InterruptedException e) {
}
}
executor.shutdown();
map.put(key, userIdMap);
}
catch(Exception ex)
{
String errorMsg = "Error while creating map :" + ex.getMessage();
logger.error(errorMsg);
}
}
}
catch(Exception ex){
String errorMsg = "Error while creating map :" + ex.getMessage();
logger.error(errorMsg);
}
logger.info("Size of Map : " + map.size());
Set<String> periods = map.keySet();
logger.info("Size of periods : " + periods.size());
for(String period :periods)
{
Map<Integer, User> mapOfuserIds = map.get(period);
Set<Integer> userIds = mapOfuserIds.keySet();
logger.info("Size of Set : " + userIds.size());
for(Integer userId : userIds){
User inf = mapOfuserIds.get(userId);
logger.info("User Id : " + inf.getUserId());
}
}
long endTime = System.currentTimeMillis();
long timeTaken = (endTime - startTime);
logger.info("All threads are completed in " + timeTaken + " milisecond");
logger.info("******END******");
}
You really don't want to create a thread pool with as many threads as there are users read from the db. That doesn't make sense most of the time, because you need to keep in mind that threads need to run somewhere... There are not many servers out there with 10 or 100 or even 1000 cores reserved for your application. A much smaller value, like maybe 5, is often enough, depending on your environment.
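Concretely, instead of sizing the pool from the user list, something like this (a sketch; 5 is just a placeholder to tune for your environment):

int poolSize = 5; // tune to cores / DB connection limit, not to the number of users
ExecutorService executor = Executors.newFixedThreadPool(poolSize);
CompletionService<User> cs = new ExecutorCompletionService<User>(executor);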
And, as always for performance topics: you first need to test what your actual bottleneck is. Your application may simply not benefit from threading because, e.g., you are reading from a db which only allows 5 concurrent connections at the same time. In that case, all your other 995 threads will simply wait.
Another thing to consider is network latency: reading multiple user ids from multiple threads may even increase the round-trip time needed to get the data for one user from the database. An alternative approach might be not to read one user at a time, but the data of all 10,000 of them at once. That way, your possibly available 10 GBit Ethernet connection to your database might really speed things up, because there is only a small communication overhead with the database, and it may serve you all the data you need in one answer quickly.
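As an illustration only (db.findByUserIds and EmployeeTable2.getUserId here are hypothetical, not taken from your code):

// Hypothetical batched lookup: one round trip for all users instead of one query per user
List<Integer> userIds = eligibleUsers.stream()
        .map(UserTable1::getUserId)
        .collect(Collectors.toList());
Map<Integer, EmployeeTable2> empByUserId = db.findByUserIds(userIds).stream()
        .collect(Collectors.toMap(EmployeeTable2::getUserId, e -> e));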
So, in short: in my opinion your question is about performance optimization of your problem in general, but you don't yet know enough to decide which way to go.
You could try something like this:
List<String> periods = db.getPeriods(todayDate);
Map<String, Map<Integer, User>> hm = new ConcurrentHashMap<>(); // written to from the parallel stream
periods.parallelStream().forEach(period -> {
    List<UserTable1> eligibleUsers = db.findAllUsers(period); // or however you get the eligible users
    hm.put(period, eligibleUsers.parallelStream().collect(
            Collectors.toConcurrentMap(UserTable1::getUserId, u -> createUserForId(u.getUserId()))));
});
And in createUserForId you do your db reading:
private User createUserForId(Integer id) {
    List<EmployeeTable2> empData = db.findByUserId(id);
    EmployeeTable2 emp = empData.get(0); // add your own validation / empty-list handling here
    User user = new User();
    user.setUserId(id);
    user.setFullName(emp.getFullName());
    return user;
}