I am trying to figure out how to get time-based streaming, but on an infinite stream. The reason is pretty simple: I want Web Service call latency results per unit time.
But that would mean I would have to terminate the stream (as I currently understand it), and that's not what I want.
In words: if 10 WS calls came in during a 1-minute interval, I want a list/stream of their latency results (in order) passed to stream processing. But obviously I hope to get more WS calls, at which time I would want to invoke the processors again.
I could totally be misunderstanding this. I had thought of using Collectors.groupingBy(x -> someTimeGrouping), so that all calls are grouped by whatever measurement interval I chose. But then no code will be aware of the groups until I invoke a terminal operation, at which point the monitoring process is done.
Just trying to learn Java 8 by applying it to previous code.
By definition and construction a stream can only be consumed once, so if you send your results to an infinite stream, you will not be able to access them more than once. Based on your description, it sounds like it would make more sense to store the latency results in a collection, say an ArrayList, and when you need to analyse the data, use the stream functionality to group them.
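A minimal sketch of that approach, assuming a hypothetical Latency holder class and minute-sized grouping intervals; a thread-safe queue is used because the WS calls arrive concurrently:

import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.stream.Collectors;

public class LatencyMonitor {

    // Hypothetical holder for one measurement
    static class Latency {
        final Instant timestamp;
        final long millis;
        Latency(Instant timestamp, long millis) {
            this.timestamp = timestamp;
            this.millis = millis;
        }
    }

    // Thread-safe collection that the WS-call code appends to; never "closed"
    private final Queue<Latency> results = new ConcurrentLinkedQueue<>();

    public void record(long millis) {
        results.add(new Latency(Instant.now(), millis));
    }

    // Called on whatever schedule you choose; a fresh stream over the
    // collection groups the measurements by minute
    public Map<Instant, List<Long>> latenciesPerMinute() {
        return results.stream()
                .collect(Collectors.groupingBy(
                        l -> l.timestamp.truncatedTo(ChronoUnit.MINUTES),
                        Collectors.mapping(l -> l.millis, Collectors.toList())));
    }
}

Each call to latenciesPerMinute() creates a new stream over the same collection, which sidesteps the consume-once restriction entirely.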
I have been looking for an answer to this for a week and couldn't find anything relevant, so I finally decided to post here.
I have a use case where I need to give custom timeouts to different API calls. This use case sounds very common, right? Well, I want to achieve this without using any extra threads. I am looking for a system clock as described below.
So basically, I want to write one method (call it enforceTimeout()). This method takes a callable (the API call converted into callable form, which returns a response or an exception) and a timeout in milliseconds.
public static <T> T enforceTimeout(Callable<T> callable, long timeoutInMs) throws Exception {
    // Do some steps on the current thread, but do not create a new thread/thread pool:
    // 1. Start the clock
    // 2. Make the API call
    // 3. The clock runs in the background and takes care of the timeout; if the API call
    //    exceeds the time limit, the exception is raised automatically.
}
Now, some of you might wonder how we can keep track of elapsed time without creating a new thread. So let me describe a strategy like the event loop in JavaScript. We can define a system clock. This clock should be able to look after the timeouts of 10 to 100 such callables. It can keep the callables in a priority queue (whichever callable has the closest ending time comes first) and check whether the head of the queue has crossed its time limit.
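A minimal sketch of the bookkeeping that strategy implies; the names are my own, and checkTimeouts() has to be reached cooperatively, since on a single thread nothing can interrupt a call that is blocked and never checks in:

import java.util.PriorityQueue;
import java.util.concurrent.Callable;
import java.util.concurrent.TimeoutException;

class TimeoutClock {

    private static class Entry implements Comparable<Entry> {
        final Callable<?> task;
        final long deadlineNanos;
        Entry(Callable<?> task, long deadlineNanos) {
            this.task = task;
            this.deadlineNanos = deadlineNanos;
        }
        @Override
        public int compareTo(Entry other) {
            return Long.compare(this.deadlineNanos, other.deadlineNanos);
        }
    }

    // The callable with the closest ending time sits at the head
    private final PriorityQueue<Entry> queue = new PriorityQueue<>();

    void register(Callable<?> task, long timeoutInMs) {
        queue.add(new Entry(task, System.nanoTime() + timeoutInMs * 1_000_000L));
    }

    // Must be called periodically from the working thread itself
    void checkTimeouts() throws TimeoutException {
        Entry head = queue.peek();
        if (head != null && System.nanoTime() >= head.deadlineNanos) {
            queue.poll();
            throw new TimeoutException("Callable exceeded its time limit");
        }
    }
}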
Your next argument would be that one such system-clock instance would be inefficient at managing a large number of callables. In that case, we need a system-clock manager, which decides how many such clocks we need and handles the scaling of the system-clock instances.
Please let me know if anything like this is possible in Java. If my question/idea is a duplicate, please guide me to the discussion where I can find more information about it. Thank you very much.
I read that:
When processing is complete in your Tasklet implementation, you return
an org.springframework.batch.repeat.RepeatStatus object. There are two
options with this: RepeatStatus.CONTINUABLE and RepeatStatus.FINISHED.
These two values can be confusing at first glance. If you return
RepeatStatus.CONTINUABLE, you aren't saying that the job can continue.
You're telling Spring Batch to run the tasklet again. Say, for
example, that you wanted to execute a particular tasklet in a loop
until a given condition was met, yet you still wanted to use Spring
Batch to keep track of how many times the tasklet was executed,
transactions, and so on. Your tasklet could return
RepeatStatus.CONTINUABLE until the condition was met. If you return
RepeatStatus.FINISHED, that means the processing for this tasklet is
complete (regardless of success) and to continue with the next piece
of processing.
But I can't imagine an example of using this feature. Could you explain it to me? When will the tasklet be invoked the next time?
Let's say that you have a large set of items (for example files), and you need to enrich each one of them in some way, which requires consuming an external service. The external service might provide a chunked mode that can process up to 1000 requests at once instead of making a separate remote call for each single file. That might be the only way you can bring down your overall processing time to the required level.
However, this is not possible to implement using Spring Batch's Reader/Processor/Writer API in a nice way, because the Processor is fed item by item and not entire chunks of them. Only the Writer actually sees chunks of items.
You could implement this using a Tasklet that reads the next up to 1000 unprocessed files, sends a chunked request to the service, processes the results, writes output files, and deletes or moves the processed files.
Finally it checks whether there are any unprocessed files left. Depending on that, it returns either FINISHED or CONTINUABLE; in the latter case the framework invokes the Tasklet again to process the next up to 1000 files, as sketched below.
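A minimal sketch of such a Tasklet, assuming hypothetical FileRepository and EnrichmentService helpers that stand in for your own reading and remote-call code:

import java.io.File;
import java.util.List;

import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.repeat.RepeatStatus;

public class ChunkedEnrichmentTasklet implements Tasklet {

    private static final int BATCH_SIZE = 1000;

    private final FileRepository fileRepository;       // hypothetical helper
    private final EnrichmentService enrichmentService; // hypothetical helper

    public ChunkedEnrichmentTasklet(FileRepository fileRepository,
                                    EnrichmentService enrichmentService) {
        this.fileRepository = fileRepository;
        this.enrichmentService = enrichmentService;
    }

    @Override
    public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext)
            throws Exception {
        List<File> batch = fileRepository.nextUnprocessed(BATCH_SIZE);
        if (batch.isEmpty()) {
            return RepeatStatus.FINISHED;       // nothing left, the step is done
        }
        enrichmentService.enrichAll(batch);     // one chunked remote call for up to 1000 files
        fileRepository.markProcessed(batch);
        // More files may remain: tell Spring Batch to run this tasklet again
        return RepeatStatus.CONTINUABLE;
    }
}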
This is actually quite a realistic scenario, so I hope that illustrates the purpose of the feature.
This allows you to break up processing of a complex task across multiple iterations.
The functionality is similar to a while(true) loop with continue/break.
I have a webservice ABC
ABC Operations:
A. Call XYZ web service
B. Store response in db
C. Return result
Overall ABC response time = 18 sec
XYZ response time = 8 sec
ABC-only response time = 18 - 8 = 10 sec
I want to minimize response time of ABC service.
How can this be done?
A few things I thought of:
1. Send a partial request and get a partial response, but that's not possible in my case.
2. Return the response and perform the DB write asynchronously. (Can this be done in a reliable manner?)
3. Is there any way to improve the DB write operation?
If it is possible to "perform db in asynchronous manner", i.e. if you can respond to the caller before the DB write completes, then you can use the 'write behind' pattern to perform the DB writes asynchronously.
The write-behind pattern looks like this: queue each data change, and make this queue subject to a configurable duration (aka the "write-behind delay") and a maximum size. When data changes, it is added to the write-behind queue (if it is not already in the queue) and it is written to the underlying store whenever one of the following conditions is met (see the sketch after this list):
The write behind delay expires
The queue exceeds a configurable size
The system enters shutdown mode and you want to ensure that no data is lost
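A minimal sketch of the pattern, assuming a hypothetical DbWriter interface and flush parameters (the "not already in the queue" dedup check is omitted for brevity):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

interface DbWriter<T> {
    void writeAll(List<T> batch); // wraps your actual DB write
}

class WriteBehindQueue<T> {

    private final BlockingQueue<T> queue = new LinkedBlockingQueue<>();
    private final DbWriter<T> writer;
    private final int maxSize;
    private final ScheduledExecutorService flusher = Executors.newSingleThreadScheduledExecutor();

    WriteBehindQueue(DbWriter<T> writer, int maxSize, long delayMs) {
        this.writer = writer;
        this.maxSize = maxSize;
        // Condition 1: the write-behind delay expires
        flusher.scheduleWithFixedDelay(this::flush, delayMs, delayMs, TimeUnit.MILLISECONDS);
    }

    void add(T change) {
        queue.add(change);
        // Condition 2: the queue exceeds the configured size
        if (queue.size() >= maxSize) {
            flush();
        }
    }

    synchronized void flush() {
        List<T> batch = new ArrayList<>();
        queue.drainTo(batch);
        if (!batch.isEmpty()) {
            writer.writeAll(batch);
        }
    }

    void shutdown() {
        // Condition 3: shutdown mode, ensure no data is lost
        flusher.shutdown();
        flush();
    }
}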
There is plenty of prior art in this space. For example, Spring’s Cache Abstraction allows you to add a caching layer and it supports JSR-107 compliant caches such as Ehcache 3.x which provides a write behind cache writer. Spring’s caching service is an abstraction not an implementation, the idea being that it will look after the caching logic for you while you continue to provide the store and the code to interact with the store.
You should also look at whatever else is happening inside ABC, other than the call to XYZ, if the DB call accounts for all of those extra 10s then ‘write behind’ will save you ~10s but if there are other activities happening in those 10s then you’ll need to address those separately. The key point here is to profile the calls inside ABC so that you can identify exactly where time is spent and then prioritise each phase according to factors such as (a) how long that phase takes; (b) how easily that time can be reduced.
If you move to a ‘write behind’ approach then the elapsed time of the DB is no longer an issue for your caller but it might still be an issue within ABC since long write times could cause the queue of ‘write behind’ instructions to build up. In that case, you would profile the DB call to understand why it is taking so long. Common candidates include: attempting to write large data items (e.g. a large denormalised data item), attempting to write into a table/store which is heavily indexed.
As far as I know, you can follow these options based on your requirement:
Consider caching the results of the XYZ response and storing them to the database later, so that you can minimise the calls.
There could be failures with option 2 (the asynchronous DB write), but you can still handle them by writing the failure cases to an error log and processing them later.
The DB write operation can be improved with proper indexing, normalisation, etc.
I read a huge file (almost 5 million lines). Each line contains a date and a request; I must parse the requests between specific dates. I use a BufferedReader to read the file up to the start date and then start parsing lines. Can I use threads to parse the lines, since it takes a lot of time?
It isn't entirely clear from your question, but it sounds like you are reparsing your 5 million-line file every time a client requests data. You certainly can solve the problem by throwing more threads and more CPU cores at it, but a better solution would be to improve the efficiency of your application by eliminating duplicate work.
If this is the case, you should redesign your application to avoid reparsing the entire file on every request. Ideally you should store data in a database or in-memory instead of processing a flat text file on every request. Then on a request, look up the information in the database or in-memory data structure.
If you cannot eliminate the 5 million-line file entirely, you can periodically recheck the large file for changes, skip/seek to the end of the last record that was parsed, then parse only new records and update the database or in-memory data structure. This can all optionally be done in a separate thread.
Firstly, 5 million lines of 1,000 characters each is only about 5 GB, which is not necessarily prohibitive for a JVM. If this is actually a critical use case with lots of hits, then buying more memory is almost certainly the right thing to do.
Secondly, if that is not possible, most likely the right thing to do is to build an ordered map keyed by date. Every date is a key in the map and points to a list of the line numbers that contain the requests for that date. You can then go directly to the relevant line numbers.
Something of the form
new TreeMap<Date, List<Integer>>()
would do nicely. That should have a memory usage on the order of 5,000,000 * 32 / 8 bytes = 20 MB, which should be fine.
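A minimal sketch of building that index in one pass, assuming a hypothetical parseDate() helper for your line format:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import java.util.TreeMap;

// Builds the date -> line-number index in a single pass over the file
static TreeMap<Date, List<Integer>> buildIndex(String path) throws IOException {
    TreeMap<Date, List<Integer>> index = new TreeMap<>();
    try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
        String line;
        int lineNumber = 0;
        while ((line = reader.readLine()) != null) {
            Date date = parseDate(line); // assumed helper for your line format
            index.computeIfAbsent(date, d -> new ArrayList<>()).add(lineNumber);
            lineNumber++;
        }
    }
    return index;
}

index.subMap(startDate, endDate) then yields exactly the line numbers you need to revisit.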
You could also use the FileChannel class to keep the I/O handle open as you jump from one line to another. It also allows memory mapping.
See http://docs.oracle.com/javase/7/docs/api/java/nio/channels/FileChannel.html
And http://en.wikipedia.org/wiki/Memory-mapped_file
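A minimal sketch of the memory-mapping part, assuming the byte offset of the first relevant line is already known (for example from the index above):

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Memory-maps a window of the file starting at a known byte offset
try (RandomAccessFile raf = new RandomAccessFile("requests.log", "r");
     FileChannel channel = raf.getChannel()) {
    long offset = 0;                                           // assumed known offset
    long window = Math.min(1 << 20, channel.size() - offset);  // map up to 1 MB
    MappedByteBuffer buffer = channel.map(FileChannel.MapMode.READ_ONLY, offset, window);
    while (buffer.hasRemaining()) {
        byte b = buffer.get(); // consume bytes; split into lines as needed
    }
}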
A good way to parallelize a lot of small tasks is to wrap the processing of each task in a FutureTask and then pass each task to a ThreadPoolExecutor to run them. The executor should be initialized with the number of CPU cores your system has available.
When you call executor.execute(future), the future will be queued for background processing. To avoid creating and destroying too many threads, the ThreadPoolExecutor will only create as many threads as you specified and execute the futures one after another.
To retrieve the result of a future, call future.get(). If the future hasn't completed yet (or wasn't even started yet), this method blocks until it has. But other futures get executed in the background while you wait.
Remember to call executor.shutdown() when you don't need it anymore, to make sure it terminates the background threads, which it otherwise keeps around until the keep-alive time has expired or it is garbage-collected.
tl;dr pseudocode:
create executor
for each line in file
create new FutureTask which parses that line
pass future task to executor
add future task to a list
for each entry in task list
call entry.get() to retrieve result
executor.shutdown()
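A runnable version of that sketch, using submit(), which wraps each task in a FutureTask for you; the Result type and parse() method are placeholders for your own parsing code:

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

int cores = Runtime.getRuntime().availableProcessors();
ExecutorService executor = Executors.newFixedThreadPool(cores);
List<Future<Result>> futures = new ArrayList<>();

try (BufferedReader reader = new BufferedReader(new FileReader("requests.log"))) {
    String line;
    while ((line = reader.readLine()) != null) {
        final String current = line;
        futures.add(executor.submit(() -> parse(current))); // parse() is your own code
    }
}

for (Future<Result> future : futures) {
    Result result = future.get(); // blocks until that line has been parsed
    // ... use result ...
}
executor.shutdown();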
My application takes a lot of measurements of its internal processes. For example, I time certain methods, I time external webservice calls, and I also have variables with changing values and processes that have a 'state' (e.g. PAUSED, WAITING, etc.).
The application uses 100 to 200 threads, and each bit of data would be associated with a particular thread.
I am looking for software that I can channel all this information into that would produce useful metrics and graphs of the data (ideally in real time or close to it), let me set thresholds to trigger warnings, allow me to filter the data by thread or thread group, and so on.
The application performs time-critical tasks, so the software/API would need to be very fast and never block.
The application is written in Java, and ideally the software/API would be in Java as well. I think what I'm looking for is called Event Stream Processing, but I'm really not sure what language to use to describe it.
All I've found so far are Esper and ERMA. Can anyone give me a recommendation? I'm the only one working on this project so I'm hoping for something that is pretty easy to set up and use, and has a workable front end.
In the end I found Graphite, which was pretty close to exactly what I wanted. It was not the simplest thing to set up and configure, but I got it working in the end.
http://graphite.wikidot.com/
In my case I send data directly from my application to StatsD (via UDP), which collects the data and does some preprocessing before it ends up in the Whisper back end. There is a simple example of a Java interface here: https://github.com/etsy/statsd/commit/2253223f3c19d2149d65ec5bc802198ff93da4cb
Alternatively you could send your data directly to graphite, example here http://neopatel.blogspot.co.uk/2011/04/logging-to-graphite-monitoring-tool.html
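For reference, the UDP send itself is tiny. A minimal fire-and-forget sketch (the host, port, and metric name are assumptions for illustration) that never blocks the application:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class StatsdClient {

    private final DatagramSocket socket;
    private final InetAddress host;
    private final int port;

    public StatsdClient(String host, int port) throws Exception {
        this.socket = new DatagramSocket();
        this.host = InetAddress.getByName(host);
        this.port = port;
    }

    // StatsD wire format: "<metric>:<value>|ms" for a timing measurement
    public void timing(String metric, long millis) {
        byte[] data = (metric + ":" + millis + "|ms").getBytes(StandardCharsets.UTF_8);
        try {
            socket.send(new DatagramPacket(data, data.length, host, port));
        } catch (Exception e) {
            // Fire and forget: never let metrics block or break the application
        }
    }
}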