In Apache Flink, I am not able to see the output on stdout, but my job is running successfully and data is coming in.
As you are running your job on a cluster, DataStreams are printed to the stdout of the TaskManager process. This TaskManager stdout is redirected to an .out file in the ./log/ directory of the Flink root directory. I believe this is where you have seen your output.
I don't know if it is possible to change the stdout of TaskManagers; however, a quick and dirty solution could be to write the output to a socket:
output.writeToSocket(outputHost, outputPort, new SimpleStringSchema())
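To see that stream, anything that accepts a TCP connection on outputPort will do. Below is a minimal sketch of a Java receiver; outputPort is an assumed value that must match what you pass to writeToSocket, and note that SimpleStringSchema does not append a line delimiter, so the sketch reads raw bytes rather than lines. Start the receiver before the job so the sink's connection attempt succeeds.

import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Minimal sketch of a receiver for the socket sink above.
public class OutputSocketReceiver {
    public static void main(String[] args) throws Exception {
        int outputPort = 9001; // assumption: must match the port given to writeToSocket
        try (ServerSocket server = new ServerSocket(outputPort);
             Socket client = server.accept();
             InputStream in = client.getInputStream()) {
            byte[] buf = new byte[1024];
            int n;
            // SimpleStringSchema does not append a delimiter, so read raw bytes.
            while ((n = in.read(buf)) != -1) {
                System.out.print(new String(buf, 0, n, StandardCharsets.UTF_8));
            }
        }
    }
}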
public static void main(String[] args) throws Exception {
    // the host and the port to connect to
    final String hostname = "192.168.1.73";
    final int port = 9000;

    final StreamExecutionEnvironment env = StreamExecutionEnvironment.createRemoteEnvironment("192.168.1.68", 6123);

    // get input data by connecting to the socket
    DataStream<String> text = env.socketTextStream(hostname, port, "\n");

    // parse the data, group it, window it, and aggregate the counts
    DataStream<WordWithCount> windowCounts = text
        .flatMap(new FlatMapFunction<String, WordWithCount>() {
            public void flatMap(String value, Collector<WordWithCount> out) {
                for (String word : value.split("\\s")) {
                    out.collect(new WordWithCount(word, 1L));
                }
            }
        })
        .keyBy("word").timeWindow(Time.seconds(5))
        .reduce(new ReduceFunction<WordWithCount>() {
            public WordWithCount reduce(WordWithCount a, WordWithCount b) {
                return new WordWithCount(a.word, a.count + b.count);
            }
        });

    // print the results with a single thread, rather than in parallel
    windowCounts.print().setParallelism(1);

    env.execute("Socket Window WordCount");
}

public static class WordWithCount {
    public String word;
    public long count;

    public WordWithCount() {
    }

    public WordWithCount(String word, long count) {
        this.word = word;
        this.count = count;
    }

    @Override
    public String toString() {
        return word + " : " + count;
    }
}
I am monitoring several (about 15) paths for incoming files using the Apache Commons FileAlterationMonitor. These incoming files can come in batches of anywhere between 1 and 500 files at a time. I have everything set up and the application monitors the folders as expected; I have it set to poll the folders every minute. My issue is that, as expected, the listener I have set up raises an alert for each incoming file, when all I really need, and want, is to know when a new batch of files comes in. So I would like to receive a single alert as opposed to up to 500 at a time.
Does anyone have any ideas for how to control the number of alerts, or only pick up the first or last notification, or something to that effect? I would like to stick with the FileAlterationMonitor if at all possible because it will be running for long periods, and so far, from what I can tell in testing, it doesn't seem to put a heavy load on the system or slow the rest of the application down. But I am definitely open to other ideas if what I'm looking for isn't possible with the FileAlterationMonitor.
public class FileMonitor {
    private final String newDirectory;
    private FileAlterationMonitor monitor;
    private final Alerts gui;
    private final String provider;

    public FileMonitor(String d, Alerts g, String pro) throws Exception {
        newDirectory = d;
        gui = g;
        provider = pro;
    }

    public void startMonitor() throws Exception {
        // Directory to monitor
        final File directory = new File(newDirectory);
        // create new observer
        FileAlterationObserver fao = new FileAlterationObserver(directory);
        // add listener to observer
        fao.addListener(new FileAlterationListenerImpl(gui, provider));
        // wait 1 minute between folder polls.
        monitor = new FileAlterationMonitor(60000);
        monitor.addObserver(fao);
        monitor.start();
    }
}
public class FileAlterationListenerImpl implements FileAlterationListener {
    private final Alerts gui;
    private final String provider;
    private final LogFiles monitorLogs;

    public FileAlterationListenerImpl(Alerts g, String pro) {
        gui = g;
        provider = pro;
        monitorLogs = new LogFiles();
    }

    @Override
    public void onStart(final FileAlterationObserver observer) {
        System.out.println("The FileListener has started on: " + observer.getDirectory().getAbsolutePath());
    }

    @Override
    public void onDirectoryCreate(File file) {
    }

    @Override
    public void onDirectoryChange(File file) {
    }

    @Override
    public void onDirectoryDelete(File file) {
    }

    @Override
    public void onFileCreate(File file) {
        try {
            switch (provider) {
                case "Spectrum":
                    gui.alertsAreaAppend("New/Updated schedules available for Spectrum zones!\r\n");
                    monitorLogs.appendNewLogging("New/Updated schedules available for Spectrum zones!\r\n");
                    break;
                case "DirecTV ZTA":
                    gui.alertsAreaAppend("New/Updated schedules available for DirecTV ZTA zones!\r\n");
                    monitorLogs.appendNewLogging("New/Updated schedules available for DirecTV ZTA zones!\r\n");
                    break;
                case "DirecTV RSN":
                    gui.alertsAreaAppend("New/Updated schedules available for DirecTV RSN zones!\r\n");
                    monitorLogs.appendNewLogging("New/Updated schedules available for DirecTV RSN zones!\r\n");
                    break;
                case "Suddenlink":
                    gui.alertsAreaAppend("New/Updated schedules available for Suddenlink zones!\r\n");
                    monitorLogs.appendNewLogging("New/Updated schedules available for Suddenlink zones!\r\n");
                    break;
            }
        } catch (IOException e) {
            // intentionally ignored; a logging failure should not stop monitoring
        }
    }

    @Override
    public void onFileChange(File file) {
    }

    @Override
    public void onFileDelete(File file) {
    }

    @Override
    public void onStop(FileAlterationObserver observer) {
    }
}
Above are the FileMonitor class and the overridden FileAlterationListener I have so far.
Any suggestions would be greatly appreciated.
Here's a quick and crude implementation:
public class FileAlterationListenerAlterThrottler {
    private static final int DEFAULT_THRESHOLD_MS = 5000;

    private final int thresholdMs;
    private final Map<String, Long> providerLastFileProcessedAt = new HashMap<>();

    public FileAlterationListenerAlterThrottler() {
        this(DEFAULT_THRESHOLD_MS);
    }

    public FileAlterationListenerAlterThrottler(int thresholdMs) {
        this.thresholdMs = thresholdMs;
    }

    public synchronized boolean shouldAlertFor(String provider) {
        long now = System.currentTimeMillis();
        long last = providerLastFileProcessedAt.computeIfAbsent(provider, x -> 0L);
        if (now - last < thresholdMs) {
            return false;
        }
        providerLastFileProcessedAt.put(provider, now);
        return true;
    }
}
And a quicker and cruder driver:
public class Test {
    public static void main(String[] args) throws Exception {
        int myThreshold = 1000;
        FileAlterationListenerAlterThrottler throttler = new FileAlterationListenerAlterThrottler(myThreshold);
        for (int i = 0; i < 3; i++) {
            doIt(throttler);
        }
        Thread.sleep(1500);
        doIt(throttler);
    }

    private static void doIt(FileAlterationListenerAlterThrottler throttler) {
        boolean shouldAlert = throttler.shouldAlertFor("Some Provider");
        System.out.println("Time now: " + System.currentTimeMillis());
        System.out.println("Should alert? " + shouldAlert);
        System.out.println();
    }
}
Yields:
Time now: 1553739126557
Should alert? true
Time now: 1553739126557
Should alert? false
Time now: 1553739126557
Should alert? false
Time now: 1553739128058
Should alert? true
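To wire this into your listener, onFileCreate would consult the throttler before alerting. A minimal sketch, assuming commons-io's FileAlterationListenerAdaptor and a 60-second threshold to match the 1-minute poll interval (the class and field names here are mine, not from the question):

import java.io.File;
import org.apache.commons.io.monitor.FileAlterationListenerAdaptor;

public class ThrottledFileAlterationListener extends FileAlterationListenerAdaptor {
    // One throttler per listener; the 60s threshold matches the poll interval.
    private final FileAlterationListenerAlterThrottler throttler =
            new FileAlterationListenerAlterThrottler(60000);
    private final String provider;

    public ThrottledFileAlterationListener(String provider) {
        this.provider = provider;
    }

    @Override
    public void onFileCreate(File file) {
        // Only the first file of a batch alerts; the rest of the batch
        // arrives within the threshold window and is suppressed.
        if (throttler.shouldAlertFor(provider)) {
            System.out.println("New/Updated schedules available for " + provider + " zones!");
        }
    }
}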
I want to use Netflix Ribbon as a TCP client load balancer without Spring Cloud, and I wrote this test code.
public class App implements Runnable {
    public static String msg = "hello world";
    public BaseLoadBalancer lb;
    public RxClient<ByteBuf, ByteBuf> client;
    public Server echo;

    App() {
        lb = new BaseLoadBalancer();
        echo = new Server("localhost", 8000);
        lb.setServersList(Lists.newArrayList(echo));
        DefaultClientConfigImpl impl = DefaultClientConfigImpl.getClientConfigWithDefaultValues();
        client = RibbonTransport.newTcpClient(lb, impl);
    }

    public static void main(String[] args) throws Exception {
        for (int i = 40; i > 0; i--) {
            Thread t = new Thread(new App());
            t.start();
            t.join();
        }
        System.out.println("Main thread is finished");
    }

    public String sendAndRecvByRibbon(final String data) {
        String response = "";
        try {
            response = client.connect().flatMap(new Func1<ObservableConnection<ByteBuf, ByteBuf>,
                    Observable<ByteBuf>>() {
                public Observable<ByteBuf> call(ObservableConnection<ByteBuf, ByteBuf> connection) {
                    connection.writeStringAndFlush(data);
                    return connection.getInput();
                }
            }).timeout(1, TimeUnit.SECONDS).retry(1).take(1)
              .map(new Func1<ByteBuf, String>() {
                  public String call(ByteBuf byteBuf) {
                      return byteBuf.toString(Charset.defaultCharset());
                  }
              })
              .toBlocking()
              .first();
        } catch (Exception e) {
            System.out.println(((LoadBalancingRxClientWithPoolOptions) client).getMaxConcurrentRequests());
            System.out.println(lb.getLoadBalancerStats());
        }
        return response;
    }

    public void run() {
        for (int i = 0; i < 200; i++) {
            sendAndRecvByRibbon(msg);
        }
    }
}
I find it will create a new socket every time I call sendAndRecvByRibbon, even though poolEnabled is set to true. This confuses me; am I missing something?
Also, there is no option to configure the size of the pool, only PoolMaxThreads and MaxConnectionsPerHost.
My question is how to use a connection pool in my simple code, and what is wrong with my sendAndRecvByRibbon: it opens a socket and then uses it only once. How can I reuse the connection? Thanks for your time.
The server is just a simple echo server written in Python 3. I commented out conn.close() because I want a long-lived connection.
import socket
import threading
import time
import socketserver

class ThreadedTCPRequestHandler(socketserver.BaseRequestHandler):
    def handle(self):
        conn = self.request
        while True:
            client_data = conn.recv(1024)
            if not client_data:
                time.sleep(5)
            conn.sendall(client_data)
            # conn.close()

class ThreadedTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    pass

if __name__ == "__main__":
    HOST, PORT = "localhost", 8000
    server = ThreadedTCPServer((HOST, PORT), ThreadedTCPRequestHandler)
    ip, port = server.server_address
    server_thread = threading.Thread(target=server.serve_forever)
    server_thread.daemon = True
    server_thread.start()
    server.serve_forever()
And the Maven POM; I just added two dependencies to the IDE's auto-generated POM.
<dependency>
    <groupId>commons-configuration</groupId>
    <artifactId>commons-configuration</artifactId>
    <version>1.6</version>
</dependency>
<dependency>
    <groupId>com.netflix.ribbon</groupId>
    <artifactId>ribbon</artifactId>
    <version>2.2.2</version>
</dependency>
The code for printing the source port:
@Sharable
public class InHandle extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        System.out.println(ctx.channel().localAddress());
        super.channelRead(ctx, msg);
    }
}

public class Pipeline implements PipelineConfigurator<ByteBuf, ByteBuf> {
    public InHandle handler;

    Pipeline() {
        handler = new InHandle();
    }

    public void configureNewPipeline(ChannelPipeline pipeline) {
        pipeline.addFirst(handler);
    }
}
And I changed client = RibbonTransport.newTcpClient(lb, impl); to:
Pipeline pipe = new Pipeline();
client = RibbonTransport.newTcpClient(lb, pipe, impl, new DefaultLoadBalancerRetryHandler(impl));
So, your App() constructor does the initialization of lb/client/etc.
Then you're starting 40 different threads with 40 different RxClient instances (each instance has own pool by default) by calling new App() in the first for loop. To make things clear - the way you spawn multiple RxClient instances here does not allow them to share any common pool. Try to use one RxClient instance instead.
What if you change your main method like below, does it stop creating extra sockets?
public static void main(String[] args) throws Exception {
    App app = new App(); // Create things just once
    for (int i = 40; i > 0; i--) {
        Thread t = new Thread(() -> app.run()); // pass the run()
        t.start();
        t.join();
    }
    System.out.println("Main thread is finished");
}
If the above does not help fully (at least it will reduce the number of created sockets by a factor of 40), can you please clarify how exactly you determine that:
I find it will create a new socket every time I call sendAndRecvByRibbon
and what your measurements are after you update the constructor with these lines:
DefaultClientConfigImpl impl = DefaultClientConfigImpl.getClientConfigWithDefaultValues();
impl.set(CommonClientConfigKey.PoolMaxThreads, 1); // Add this one and test
Update
Yes, looking at sendAndRecvByRibbon, it seems to lack marking the PooledConnection as no longer acquired by calling close once you don't expect any further reads from it.
As long as you expect only a single read event, just change this line:
connection.getInput()
to the following:
return connection.getInput().zipWith(Observable.just(connection),
        new Func2<ByteBuf, ObservableConnection<ByteBuf, ByteBuf>, ByteBuf>() {
    @Override
    public ByteBuf call(ByteBuf byteBuf, ObservableConnection<ByteBuf, ByteBuf> conn) {
        conn.close();
        return byteBuf;
    }
});
Note that if you design a more complex protocol over TCP, the input ByteBuf can be analyzed for your specific 'end of communication' sign, which indicates the connection can be returned to the pool.
Given a directory, my application traverses and loads .mdb MS Access dbs using the Jackcess API. Inside of each database, there is a table named GCMT_CMT_PROPERTIES with a column named cmt_data containing some text. I also have a Mapper object (which essentially resembles a Map<String,String> but allows duplicate keys) which I use as a dictionary when replacing a certain word from a string.
So for example if mapper contains fox -> dog then the sentence: "The fox jumps" becomes "The dog jumps".
The design I'm going with for this program is as follows:
1. Given a directory, traverse all subdirectories and load all .mdb files into a File[].
2. For each db file in File[], create a Task<Void> called "TaskMdbUpdater" and pass it the db file.
3. Dispatch and run each task as it is created (see 2. above).
TaskMdbUpdater is responsible for locating the appropriate table and column in the db file it was given and iteratively running a "find & replace" routine on each row of the table to detect words from the dictionary and replace them (as shown in example above) and finally updating that row before closing the db. Each instance of TaskMdbUpdater is a background thread with a Jackcess API DatabaseBuilder assigned to it, so it is able to manipulate the db.
In the current state, the code runs without throwing any exceptions whatsoever; however, when I manually open the db through Access and inspect a given row, it appears not to have changed. I've tried to pin down the source of the issue without any luck and would appreciate any support. If you need to see more code, let me know and I'll update my question accordingly.
public class TaskDatabaseTaskDispatcher extends Task<Void> {
    private String parentDir;
    private String dbFileFormat;
    private Mapper mapper;

    public TaskDatabaseTaskDispatcher(String parent, String dbFileFormat, Mapper mapper) {
        this.parentDir = parent;
        this.dbFileFormat = dbFileFormat;
        this.mapper = mapper;
    }

    @Override
    protected Void call() throws Exception {
        File[] childDirs = getOnlyDirectories(getDirectoryChildFiles(new File(this.parentDir)));
        DatabaseBuilder[] dbs = loadDatabasesInParent(childDirs);
        Controller.dprint("TaskDatabaseTaskDispatcher", dbs.length + " databases were found in parent directory");
        TaskMdbUpdater[] tasks = new TaskMdbUpdater[dbs.length];
        Thread[] workers = new Thread[dbs.length];
        for (int i = 0; i < dbs.length; i++) {
            // for each db, dispatch Task so a worker can update that db.
            tasks[i] = new TaskMdbUpdater(dbs[i], mapper);
            workers[i] = new Thread(tasks[i]);
            workers[i].setDaemon(true);
            workers[i].start();
        }
        return null;
    }

    private DatabaseBuilder[] loadDatabasesInParent(File[] childDirs) throws IOException {
        DatabaseBuilder[] dbs = new DatabaseBuilder[childDirs.length];
        // Traverse children and load dbs[]
        for (int i = 0; i < childDirs.length; i++) {
            File dbFile = FileUtils.getFileInDirectory(
                    childDirs[i].getCanonicalFile(),
                    childDirs[i].getName() + this.dbFileFormat);
            dbs[i] = new DatabaseBuilder(dbFile);
        }
        return dbs;
    }
}
// StringUtils class, utility methods
public class StringUtils {
    public static String findAndReplace(String str, Mapper mapper) {
        String updatedStr = str;
        for (int i = 0; i < mapper.getMappings().size(); i++) {
            // note: replaceAll treats the key as a regular expression
            updatedStr = updatedStr.replaceAll(mapper.getMappings().get(i).getKey(), mapper.getMappings().get(i).getValue());
        }
        return updatedStr;
    }
}
// FileUtils class, utility methods:
public class FileUtils {
    /**
     * Returns only directories in given File[].
     * @param list
     * @return
     */
    public static File[] getOnlyDirectories(File[] list) throws IOException, NullPointerException {
        List<File> filteredList = new ArrayList<>();
        for (int i = 0; i < list.length; i++) {
            if (list[i].isDirectory()) {
                filteredList.add(list[i]);
            }
        }
        File[] correctSizeFilteredList = new File[filteredList.size()];
        for (int i = 0; i < filteredList.size(); i++) {
            correctSizeFilteredList[i] = filteredList.get(i);
        }
        return correctSizeFilteredList;
    }

    /**
     * Returns a File[] containing all children under specified parent file.
     * @param parent
     * @return
     */
    public static File[] getDirectoryChildFiles(File parent) {
        return parent.listFiles();
    }
}
public class Mapper {
    private List<aMap> mappings;

    public Mapper(List<aMap> mappings) {
        this.mappings = mappings;
    }

    /**
     * Returns mapping dictionary, typically used for extracting individual mappings.
     * @return List of type aMap
     */
    public List<aMap> getMappings() {
        return mappings;
    }

    public void setMappings(List<aMap> mappings) {
        this.mappings = mappings;
    }
}
/**
 * Represents a single String based K -> V mapping.
 */
public class aMap {
    private String[] mapping; // [0] - key, [1] - value

    public aMap(String[] mapping) {
        this.mapping = mapping;
    }

    public String getKey() {
        return mapping[0];
    }

    public String getValue() {
        return mapping[1];
    }

    public String[] getMapping() {
        return mapping;
    }

    public void setMapping(String[] mapping) {
        this.mapping = mapping;
    }
}
Update 1:
To verify my custom StringUtils.findAndReplace logic, I've performed the following unit test (in JUnit) which is passing:
@Test
public void simpleReplacementTest() {
    // Construct a test mapper/dictionary
    List<aMap> aMaps = new ArrayList<aMap>();
    aMaps.add(new aMap(new String[] {"fox", "dog"})); // {K, V} = K -> V
    Mapper mapper = new Mapper(aMaps);
    // Perform replacement
    String corpus = "The fox jumps";
    String updatedCorpus = StringUtils.findAndReplace(corpus, mapper);
    assertEquals("The dog jumps", updatedCorpus);
}
I'm including my TaskMdbUpdater class here separately with some logging code included, as I suspect the point of failure lies somewhere in call():
/**
 * Updates a given .mdb database according to specifications defined internally.
 * @since 2.2
 */
public class TaskMdbUpdater extends Task<Void> {
    private final String TABLE_NAME = "GCMT_CMT_PROPERTIES";
    private final String COLUMN_NAME = "cmt_data";
    private DatabaseBuilder dbPackage;
    private Mapper mapper;

    public TaskMdbUpdater(DatabaseBuilder dbPack, Mapper mapper) {
        super();
        this.dbPackage = dbPack;
        this.mapper = mapper;
    }

    @Override
    protected Void call() {
        try {
            // Controller.dprint("TaskMdbUpdater", "Worker: " + Thread.currentThread().getName() + " running");
            // Open db and extract Table
            Database db = this.dbPackage.open();
            Logger.debug("Opened database: {}", db.getFile().getName());
            Table table = db.getTable(TABLE_NAME);
            Logger.debug("Opening table: {}", table.getName());
            Iterator<Row> tableRows = table.iterator();
            // Controller.dprint("TaskMdbUpdater", "Updating database: " + db.getFile().getName());
            int i = 0;
            try {
                while (tableRows.hasNext()) {
                    // Row is basically a Map<Column_Name, Value>
                    Row cRow = tableRows.next();
                    Logger.trace("Current row: {}", cRow);
                    // Controller.dprint(Thread.currentThread().getName(), "Database name: " + db.getFile().getName());
                    // Controller.dprint("TaskMdbUpdater", "existing row: " + cRow.toString());
                    String str = cRow.getString(COLUMN_NAME);
                    Logger.trace("Row {} column field contents (before find/replace): {}", i, str);
                    String newStr = performFindAndReplaceOnString(str);
                    Logger.trace("Row {} column field contents (after find/replace): {}", i, newStr);
                    cRow.put(COLUMN_NAME, newStr);
                    Logger.debug("Updating field in row {}", i);
                    Row newRow = table.updateRow(cRow); // updateRow returns the new, updated row. Ignoring this.
                    Logger.debug("Calling updateRow on table with modified row");
                    // Controller.dprint("TaskMdbUpdater", "new row: " + newRow.toString());
                    i++;
                    Logger.trace("i = {}", i);
                }
            } catch (NoSuchElementException e) {
                // e.printStackTrace();
                Logger.error("Thread has iterated past number of rows in table", e);
            }
            Logger.info("Iterated through {} rows in table {}", i, table.getName());
            db.close();
            Logger.debug("Closing database: {}", db.getFile().getName());
        } catch (Exception e) {
            // e.printStackTrace();
            Logger.error("An error occurred while attempting to update row value", e);
        }
        return null;
    }

    /**
     * @see javafx.concurrent.Task#failed()
     */
    @Override
    protected void failed() {
        super.failed();
        Logger.error("Task failed");
    }

    @Override
    protected void succeeded() {
        Logger.debug("Task succeeded");
    }

    private String performFindAndReplaceOnString(String str) {
        // Logger.trace("OLD: [" + str + "]");
        String updatedStr = null;
        for (int i = 0; i < mapper.getMappings().size(); i++) {
            // loop through all parameter names in mapper to search for in str.
            // Note: findAndReplace already iterates over every mapping, so this
            // outer loop repeats the entire replacement once per mapping.
            updatedStr = findAndReplace(str, this.mapper);
        }
        // Logger.trace("NEW: [" + updatedStr + "]");
        return updatedStr;
    }
}
Here's a small excerpt from my log. As you can see, it doesn't seem to do anything after opening the table, which has left me a bit perplexed:
INFO (16-02-2017 17:27:59) [Thread-9] NAMEMAP.logic.TaskDatabaseTaskDispatcher.call(): Located the following directories under specified MOIS parent which contains an .mdb file:
[01_Parent_All_Safe_Test[ RV_DMS_0041RV_DMS_0001RV_DMS_0003RV_DMS_0005RV_DMS_0007RV_DMS_0012RV_DMS_0013RV_DMS_0014RV_DMS_0016RV_DMS_0017RV_DMS_0018RV_DMS_0020RV_DMS_0023RV_DMS_0025RV_DMS_0028RV_DMS_0029RV_DMS_0031RV_DMS_0033RV_DMS_0034RV_DMS_0035RV_DMS_0036RV_DMS_0038RV_DMS_0039RV_DMS_0040 ]]
...
DEBUG (16-02-2017 17:27:59) [Thread-9] NAMEMAP.logic.TaskDatabaseTaskDispatcher.call(): Created new task: NAMEMAP.logic.TaskMdbUpdater#4cfe46fe
DEBUG (16-02-2017 17:27:59) [Thread-9] NAMEMAP.logic.TaskDatabaseTaskDispatcher.call(): Created new worker: Thread[Thread-22,5,main]
DEBUG (16-02-2017 17:27:59) [Thread-9] NAMEMAP.logic.TaskDatabaseTaskDispatcher.call(): Set worker Thread[Thread-22,5,main] as daemon
DEBUG (16-02-2017 17:27:59) [Thread-9] NAMEMAP.logic.TaskDatabaseTaskDispatcher.call(): Dispatching worker: Thread[Thread-22,5,main]
...
DEBUG (16-02-2017 17:28:00) [Thread-22] NAMEMAP.logic.TaskMdbUpdater.call(): Opened database: RV_DMS_0023.mdb
DEBUG (16-02-2017 17:28:00) [Thread-22] NAMEMAP.logic.TaskMdbUpdater.call(): Opening table: GCMT_CMT_PROPERTIES
After this point, there aren't any more entries in the log, and the processor spikes at 100% load, remaining that way until I force-kill the application. This could mean the program gets stuck in an infinite while loop; however, if that were the case, shouldn't there be log entries in the file?
Update 2
Okay, I've further narrowed down the problem by printing TRACE logging to stdout. It seems that my performFindAndReplaceOnString is super inefficient and never gets past the first row of these dbs because it's just grinding away at the long string. Any suggestions on how I can efficiently perform the string replacement for this use case?
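One approach (a sketch, not from the original thread) is to make the replacement a single pass: compile every key into one alternation and walk the matches with Matcher.appendReplacement, instead of calling replaceAll once per mapping per string. This assumes the mappings are first collapsed into a Map<String, String>; the original Mapper allows duplicate keys, so the last value per key would win here.

import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SinglePassReplacer {
    // Replaces every key in one pass instead of re-scanning the whole
    // string once per mapping.
    public static String findAndReplace(String str, Map<String, String> dict) {
        if (dict.isEmpty()) {
            return str;
        }
        // Build one alternation of all (quoted) keys: \Qfox\E|\Qcat\E|...
        StringBuilder alternation = new StringBuilder();
        for (String key : dict.keySet()) {
            if (alternation.length() > 0) {
                alternation.append('|');
            }
            alternation.append(Pattern.quote(key));
        }
        Pattern pattern = Pattern.compile(alternation.toString());
        Matcher matcher = pattern.matcher(str);
        StringBuffer out = new StringBuffer();
        while (matcher.find()) {
            matcher.appendReplacement(out, Matcher.quoteReplacement(dict.get(matcher.group())));
        }
        matcher.appendTail(out);
        return out.toString();
    }
}

Pattern.quote keeps keys from being interpreted as regular expressions, and compiling the pattern once per run avoids the per-row, per-mapping compilation cost that replaceAll incurs.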
I am trying to implement Storm's guaranteed message processing, but the ack or fail methods on the spout are not being called.
I am passing a message ID object with the spout.
I am passing the tuple with each bolt and calling collector.ack(tuple) in each bolt.
Question
ack or fail is not being called, and I cannot work out why.
Here is a shortened code sample.
Spout Code using BaseRichSpout
public void nextTuple() {
    for (String usage : usageData) {
        // .... further code ....
        String msgID = UUID.randomUUID().toString() + System.currentTimeMillis();
        Values value = new Values(splitUsage[0], splitUsage[1], splitUsage[2], msgID);
        outputCollector.emit(value, msgID);
    }
}
@Override
public void ack(Object msgId) {
    this.pendingTuples.remove(msgId);
    LOG.info("Ack " + msgId);
}

@Override
public void fail(Object msgId) {
    // Re-emit the tuple
    LOG.info("Fail " + msgId);
    this.outputCollector.emit(this.pendingTuples.get(msgId), msgId);
}
Bolt Code using BaseRichBolt
@Override
public void execute(Tuple inputTuple) {
    this.outputCollector.emit(inputTuple, new Values(serverData, msgId));
    this.outputCollector.ack(inputTuple);
}
Final Bolt
@Override
public void execute(Tuple inputTuple) {
    // ..... Simply reports, does not emit .....
    this.outputCollector.ack(inputTuple);
}
The reason the ack did not work was the use of the for loop in the spout. I changed this to the counter-based version below, placed after the emit, and it works.
Example
index++;
if (index >= dataset.size()) {
    index = 0;
}
Further to this, thanks to the mailing list for the info:
It's because the spout runs on a single thread and will block in a for loop; since nextTuple will not return, the framework is never able to call the ack method.
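Putting that fix together, a nextTuple that emits a single tuple per invocation might look like the sketch below. Here dataset, pendingTuples, and outputCollector are assumed fields of the spout, following the shortened sample above rather than the real code:

// Emit one tuple per call so control returns to Storm between emits,
// giving the framework a chance to invoke ack()/fail().
private int index = 0;

@Override
public void nextTuple() {
    if (dataset.isEmpty()) {
        return;
    }
    Values value = dataset.get(index);
    String msgID = UUID.randomUUID().toString() + System.currentTimeMillis();
    pendingTuples.put(msgID, value); // kept so fail() can re-emit
    outputCollector.emit(value, msgID);
    index++;
    if (index >= dataset.size()) {
        index = 0;
    }
}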
I am using Java logging to log memory statistics to a file and java.util.logging.FileHandler to implement log rotation. Now I have a situation where my manager wants to keep the initial log file and rotate the rest of the files. Is there any way I can keep the initial log file and still rotate the rest?
public class TopProcessor extends Handler {
    Handler handler;

    public TopProcessor() throws IOException {
        File dir = new File(System.getProperty("user.home"), "logs");
        dir.mkdirs();
        File fileDir = new File(dir, "metrics");
        fileDir.mkdirs();
        String pattern = "metrics-log-%g.json";
        int count = 5;
        int limit = 500000;
        handler = new TopProcessorHandler(fileDir.getAbsolutePath() + File.separator + pattern, limit, count);
    }

    class TopProcessorHandler extends FileHandler {
        public TopProcessorHandler(String pattern, int limit, int count) throws IOException {
            super(pattern, limit, count);
        }
    }

    private void writeInformationToFile(String information) {
        handler.publish(new LogRecord(Level.ALL, information));
    }

    @Override
    public void close() {
        handler.close();
    }

    @Override
    public void flush() {
        handler.flush();
    }

    @Override
    public void publish(LogRecord record) {
        handler.publish(record);
    }
}
Create two files: one initial log file and another rotating log file. You can merge the two files when you want to read the logs.
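A minimal sketch of that idea, assuming it is acceptable to send the first chunk of output to a fixed, non-rotating FileHandler and everything afterwards to the rotating one. The cutover size and file names are assumptions, not java.util.logging features:

import java.io.File;
import java.io.IOException;
import java.util.logging.FileHandler;
import java.util.logging.Handler;
import java.util.logging.LogRecord;

public class InitialPlusRotatingHandler extends Handler {
    private final FileHandler initial;   // never rotates, written first
    private final FileHandler rotating;  // normal rotating handler
    private long charsWritten = 0;
    private final long initialLimit = 500_000; // assumed cutover size

    public InitialPlusRotatingHandler(File dir) throws IOException {
        initial = new FileHandler(new File(dir, "metrics-initial.json").getAbsolutePath());
        rotating = new FileHandler(new File(dir, "metrics-log-%g.json").getAbsolutePath(), 500000, 5);
    }

    @Override
    public synchronized void publish(LogRecord record) {
        // Route to the initial file until the cutover size is reached
        // (approximated by message length), then to the rotating handler.
        if (charsWritten < initialLimit) {
            charsWritten += record.getMessage() == null ? 0 : record.getMessage().length();
            initial.publish(record);
        } else {
            rotating.publish(record);
        }
    }

    @Override
    public void flush() {
        initial.flush();
        rotating.flush();
    }

    @Override
    public void close() {
        initial.close();
        rotating.close();
    }
}

Reading the history back is then the initial file followed by the rotated metrics-log-*.json files in generation order.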