I've been reading up on Spark and am very interested in the ability to allocate computation across scalable compute clusters. We have production stream processing code (5K lines, written in Java 9) that handles AMQP message processing, which we would like to run in a Spark cluster.
However, I feel like I must misunderstand the basic premise of Spark. On the one hand, it runs Java and we should be able to run our applications with it, but on the other hand it seems (from the documentation) that all code must be rewritten to the Spark API (using Dataframes/Datasets). Is this true? Can Java applications be used as-is with Spark, or must they be rewritten? This seems like a major limitation or rather a showstopper for us.
I think, ideally, we would want to use Spark to handle high-level message routing (using the Structured Streaming API), which would hand off each message to our Java application to handle computation, database writes, etc. The core part of our code is a single class interface, and Spark could map each message to an instance of that class. Hence, there would likely be many, many instances processing messages in parallel, both within each machine instance and distributed across the cluster.
Am I missing something here?
For your question "Can Java applications be used as-is with Spark, or must they be rewritten?":
Yes, you have to rewrite the data interaction layer.
Spark reads the source data in the form of an RDD/DataFrame; in your case that means streaming DataFrames/Datasets.
Spark's parallel processing and job scheduling are based on these Datasets/DataFrames.
A DataFrame/Dataset is comparable to an array whose data is stored across multiple nodes.
So if you have logic in Java that iterates over a list and writes to a file:
conn = openFile(...)
values.foreach { value ->
    updatedValue = /* your business logic on the value */
    conn.write(updatedValue)
}
In Spark, you have to deal with the DataFrame instead:
dataframe.map { value ->
    updatedValue = /* your business logic on the value */   // <-- reuse your logic here
    updatedValue
}.saveToFile(/* file path */)
Hopefully you can see the difference: you can reuse your business logic, but Spark has to handle the data flow, both the reads and the writes (recommended).
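For example, a rough Java sketch of that shape, assuming the Spark SQL (Dataset) API is on the classpath; MyBusinessLogic.process() stands in for your existing per-record code and is not something from the question:

import org.apache.spark.api.java.function.MapFunction;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.SparkSession;

SparkSession spark = SparkSession.builder().appName("reuse-logic").getOrCreate();

// Spark owns the read: the source is split across the cluster as a Dataset
Dataset<String> input = spark.read().textFile("hdfs:///input/messages");

// the existing per-record logic is reused inside map() and runs in parallel on the executors
Dataset<String> updated = input.map(
        (MapFunction<String, String>) value -> MyBusinessLogic.process(value),
        Encoders.STRING());

// Spark owns the write as well
updated.write().text("hdfs:///output/updated");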
I'm looking to disable statistics collection for all sequences and mediators in WSO2 EI. I still want to collect statistics about service calls and so on, but discard the unwanted statistics about sequences and mediators contained in those services (which amount to a lot of unnecessary data).
I'm aware that apart from enabling/disabling statistics for specific services, you can also disable statistics for specific sequences, which would also mean not collecting stats about mediators contained in those sequences. However, in our project some services only contain mediators and not sequences.
So far we've tried adding these booleans to the synapse.properties file:
mediation.flow.statistics.collect.proxy=true
mediation.flow.statistics.collect.api=true
mediation.flow.statistics.collect.mediator=false
mediation.flow.statistics.collect.sequence=false
mediation.flow.statistics.collect.resource=true
mediation.flow.statistics.collect.endpoint=true
and editing the reportEntryEvent() and reportChildEntryEvent() methods in the org.apache.synapse.aspects.flow.statistics.collectors.OpenEventCollector.java file. For example, if the incoming componentType is a mediator, I return early from reportChildEntryEvent(), assuming that would stop the statistics collection process. However, this logic doesn't seem to be correct, as I still receive mediator statistics in my Stream Processor.
This statistics handling is probably also managed somewhere else, but I'm struggling to see where, and what exactly in the wso2-synapse code I should edit to achieve this behavior.
Thanks for any reply.
There's a REST endpoint, which serves large (tens of gigabytes) chunks of data to my application.
The application processes the data at its own pace, and as incoming data volumes grow, I'm starting to hit the REST endpoint timeout.
Meaning, processing speed is less than network throughput.
Unfortunately, there's no way to raise processing speed enough, as there's no "enough" - incoming data volumes may grow indefinitely.
I'm thinking of a way to store incoming data locally before processing, in order to release REST endpoint connection before timeout occurs.
What I've come up with so far is downloading the incoming data to a temporary file and reading (processing) that file simultaneously, using an OutputStream/InputStream.
A sort of buffering, using a file.
This brings its own problems:
what if processing speed becomes faster than downloading speed for some time and I get an EOF?
the file parser operates with an ObjectInputStream, and it behaves weirdly in case of an empty file/EOF
and so on
Are there conventional ways to do such a thing?
Are there alternative solutions?
Please provide some guidance.
Upd:
I'd like to point out: http server is out of my control.
Consider it to be a vendor data provider. They have many consumers and refuse to alter anything for just one.
Looks like we're the only ones to use all of their data, as our client app's processing speed is far greater than their sample client performance metrics. Still, we cannot match our app's performance with the network throughput.
Server does not support http range requests or pagination.
There's no way to divide data in chunks to load, as there's no filtering attribute to guarantee that every chunk will be small enough.
In short: we can download all the data within the given time before the timeout occurs, but we cannot process it.
Having an adapter between the InputStream and the OutputStream, to perform as a blocking queue, will help a ton.
You're using something like new ObjectInputStream(new FileInputStream(...)), and the solution for EOF could be wrapping the FileInputStream first in a WriterAwareStream, which would block when hitting EOF as long as the writer is still writing.
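A WriterAwareStream isn't an existing JDK class; a minimal sketch of the idea could look like this (the AtomicBoolean flag, flipped by the downloading thread when it finishes, is my own assumption):

import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.atomic.AtomicBoolean;

// Blocks instead of returning -1 while the writer is still appending to the file.
class WriterAwareStream extends FilterInputStream {
    private final AtomicBoolean writerDone;

    WriterAwareStream(InputStream in, AtomicBoolean writerDone) {
        super(in);
        this.writerDone = writerDone;
    }

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        while (true) {
            int n = super.read(b, off, len);
            if (n != -1 || writerDone.get()) {
                return n;              // real data, or a genuine EOF after the download finished
            }
            try {
                Thread.sleep(100);     // hit EOF but the writer is still active: wait and retry
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IOException("interrupted while waiting for more data", e);
            }
        }
    }

    @Override
    public int read() throws IOException {
        byte[] one = new byte[1];
        int n = read(one, 0, 1);
        return n == -1 ? -1 : one[0] & 0xff;
    }
}

You would then build the parser as new ObjectInputStream(new WriterAwareStream(new FileInputStream(tempFile), writerDone)), so a premature EOF simply makes the reader wait instead of failing.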
Anyway, in case latency doesn't matter much, I would not bother starting processing before the download has finished. Oftentimes, there isn't much you can do with an incomplete list of objects.
Maybe some memory-mapped-file-based queue like Chronicle-Queue may help you. It's faster than dealing with files directly and may be even simpler to use.
You could also implement a HugeBufferingInputStream backed internally by a queue, which reads from its input stream and, in case it accumulates a lot of data, spills it out to disk. This may be a nice abstraction, completely hiding the buffering.
There's also FileBackedOutputStream in Guava, which automatically switches from using memory to using a file when it gets big, but I'm afraid it's optimized for small sizes (with tens of gigabytes expected, there's no point in trying to use memory).
Are there alternative solutions?
If your consumer (the http client) is having trouble keeping up with the stream of data, you might want to look at a design where the client manages its own work in progress, pulling data from the server on demand.
RFC 7233 describes Range Requests:
devices with limited local storage might benefit from being able to request only a subset of a larger representation, such as a single page of a very large document, or the dimensions of an embedded image
HTTP Range requests on the MDN Web Docs site might be a more approachable introduction.
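For illustration, if the server did honour ranges, pulling one chunk at a time could look roughly like this in Java (java.net.http on JDK 11; the URL and chunk size are placeholders):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

HttpClient client = HttpClient.newHttpClient();
long chunkSize = 1_048_576L;                 // 1 MiB per request
long offset = 0;

HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("https://data.example.com/export"))
        .header("Range", "bytes=" + offset + "-" + (offset + chunkSize - 1))
        .build();

HttpResponse<byte[]> response = client.send(request, HttpResponse.BodyHandlers.ofByteArray());
// 206 Partial Content means the range was honoured; 200 means the server ignored the
// Range header and sent the whole representation, so check response.statusCode() before looping.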
This is the sort of thing that queueing servers are made for. RabbitMQ, Kafka, Kinesis, any of those. Perhaps KStream would work. With everything you get from the HTTP server (given your constraint that it cannot be broken up into units of work), you could partition it into chunks of bytes of some reasonable size, maybe 1024kB. Your application would push/publish those records/messages to the topic/queue. They would all share some common series ID so you know which chunks match up, and each would need to carry an ordinal so they can be put back together in the right order; with a single Kafka partition you could probably rely upon offsets. You might publish a final record for that series with a "done" flag that would act as an EOF for whatever is consuming it. Of course, you'd send an HTTP response as soon as all the data is queued, though it may not necessarily be processed yet.
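For illustration, the chunking/publishing side of that idea might look roughly like this (the topic name, the 1 MiB chunk size and the ordinal header are my own placeholder choices):

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

static void publishChunks(InputStream in, KafkaProducer<String, byte[]> producer,
                          String seriesId) throws IOException {
    byte[] buffer = new byte[1024 * 1024];
    int ordinal = 0;
    int n;
    while ((n = in.read(buffer)) != -1) {
        ProducerRecord<String, byte[]> record =
                new ProducerRecord<>("incoming-chunks", seriesId, Arrays.copyOf(buffer, n));
        // same key => same partition, so offsets preserve order; the ordinal header is a
        // belt-and-braces way to stitch the chunks back together on the consumer side
        record.headers().add("ordinal",
                Integer.toString(ordinal++).getBytes(StandardCharsets.UTF_8));
        producer.send(record);
    }
    // an empty record acts as the "done"/EOF marker for this series
    producer.send(new ProducerRecord<>("incoming-chunks", seriesId, new byte[0]));
    producer.flush();
}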
Not sure if this would help in your case, because you haven't mentioned what structure & format the data are coming to you in; however, I'll assume a beautifully normalised, deeply nested hierarchical XML (i.e. pretty much the worst case for streaming, right? ... pega bix?)
I propose a partial solution that could allow you to sidestep the limitation of not being able to control how your client interacts with the HTTP data server:
deploy your own webserver, in whatever contemporary tech you please (which you do control) - your local server will sit in front of your locally cached copy of the data
periodically download the output of the webservice using a built-in HTTP querying library, a command-line util such as aria2c, curl, wget et al., an ETL (or whatever you please) directly onto a local device-backed .xml file - this happens as often as it needs to (a rough sketch of this step follows after these steps)
point your REST client to your own-hosted 127.0.0.1/modern_gigabyte_large/get... 'smart' server, instead of the old api.vendor.com/last_tested_on_megabytes/get... server
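As promised above, a rough sketch of the periodic-download step (java.net.http on JDK 11; the vendor URL, local paths and the six-hour interval are placeholders):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
HttpClient client = HttpClient.newHttpClient();

scheduler.scheduleAtFixedRate(() -> {
    try {
        Path partial = Paths.get("/data/feed.xml.part");
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.vendor.com/get"))
                .build();
        // stream straight to disc, then swap the finished file in so readers never see a partial one
        client.send(request, HttpResponse.BodyHandlers.ofFile(partial));
        Files.move(partial, Paths.get("/data/feed.xml"), StandardCopyOption.REPLACE_EXISTING);
    } catch (Exception e) {
        e.printStackTrace();                   // keep the schedule alive; the next run retries
    }
}, 0, 6, TimeUnit.HOURS);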
some thoughts:
you might need to refactor your data model to indicate that the xml webservice data that you and your clients are consuming was dated at the last successful run^ (ie. update this date when the next ingest process completes)
it would be theoretically possible for you to transform the underlying XML on the way through to better yield records in a streaming fashion to your webservice client (if you're not already doing this), but this would take effort - I could discuss this more if a sample of the data structure was provided
all of this work can run in parallel to your existing application, which continues on your last version of the successfully processed 'old data' until the next version of 'new data' is available
^ In trade, you will now need to manage a 'sliding window' of data files, where each 'result' is a specific instance of your app downloading the webservice data and storing it on disc, then successfully ingesting it into your model:
last (two?) good result(s) compressed (in my experience, gigabytes of xml packs down a helluva lot)
next pending/ provisional result while you're streaming to disc/ doing an integrity check/ ingesting data - (this becomes the current 'good' result, and the last 'good' result becomes the 'previous good' result)
if we assume that you're ingesting into a relational db, the current (and maybe previous) tables with the webservice data loaded into your app, and the next pending table
switching these around becomes a metadata operation, but now your database must store at least webservice data x2 (or x3 - whatever fits in your limitations)
... yes you don't need to do this, but you'll wish you did after something goes wrong :)
Looks like we're the only ones to use all of their data
this implies that there is some way for you to partition or limit the webservice feed - how are the other clients discriminating so as not to receive the full monty?
You can use in-memory caching techniques, or you can use Java 8 streams. Please see the following link for more info:
https://www.conductor.com/nightlight/using-java-8-streams-to-process-large-amounts-of-data/
Camel could maybe help you regulate the network load between the REST producer and the consumer?
You might, for instance, introduce a Camel endpoint acting as a proxy in front of the real REST endpoint and apply some throttling policy before forwarding to the real endpoint:
from("http://localhost:8081/mywebserviceproxy")
.throttle(...)
.to("http://myserver.com:8080/myrealwebservice);
http://camel.apache.org/throttler.html
http://camel.apache.org/route-throttling-example.html
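For completeness, a slightly fuller sketch of such a proxy route, assuming the camel-jetty component for the consuming side and Camel 2.x (the 10-per-second rate and the URLs are placeholders):

import org.apache.camel.builder.RouteBuilder;

public class ThrottlingProxyRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("jetty:http://localhost:8081/mywebserviceproxy")
            .throttle(10).timePeriodMillis(1000)   // at most 10 exchanges per second
            .to("http://myserver.com:8080/myrealwebservice?bridgeEndpoint=true");
    }
}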
My 2 cents,
Bernard.
If you have enough memory, maybe you can use an in-memory data store like Redis.
When you get data from your REST endpoint, you can save it into a Redis list (or any other data structure that is appropriate for you).
Your consumer will consume data from the list.
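For example, a minimal sketch with the Jedis client (the list key "incoming-data" and string payloads are assumptions; plug in your own serialization):

import redis.clients.jedis.Jedis;
import java.util.List;

// producer side: push each record as it arrives from the REST endpoint
static void enqueue(Jedis jedis, String record) {
    jedis.rpush("incoming-data", record);
}

// consumer side: block until a record is available, then hand it to the existing processing code
static String dequeue(Jedis jedis) {
    List<String> item = jedis.blpop(0, "incoming-data");  // timeout 0 = wait indefinitely
    return item.get(1);                                   // element 0 is the key, element 1 the value
}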
Is there functionality built into Kafka Streams that allows for dynamically connecting a single input stream into multiple output streams? KStream.branch allows branching based on true/false predicates, but this isn't quite what I want. I'd like each incoming log to determine the topic it will be streamed to at runtime, e.g., a log {"date": "2017-01-01"} will be streamed to the topic topic-2017-01-01 and a log {"date": "2017-01-02"} will be streamed to the topic topic-2017-01-02.
I could call forEach on the stream, then write to a Kafka producer, but that doesn't seem very elegant. Is there a better way to do this within the Streams framework?
If you want to create topics dynamically based on your data, you do not get any support within Kafka's Streams API at the moment (v0.10.2 and earlier). You will need to create a KafkaProducer and implement the dynamic "routing" yourself (for example using KStream#foreach() or KStream#process()). Note that you need to do synchronous writes to avoid data loss (which are unfortunately not very performant). There are plans to extend the Streams API with dynamic topic routing, but there is no concrete timeline for this feature right now.
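A rough sketch of that approach (written against the 0.10.x-era API; the topic prefix and the extractDate() helper that pulls the date out of the JSON value are placeholders, not part of any Kafka API):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;

Properties producerProps = new Properties();
producerProps.put("bootstrap.servers", "localhost:9092");
producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps);

KStreamBuilder builder = new KStreamBuilder();
KStream<String, String> logs = builder.stream("input-logs");

logs.foreach((key, value) -> {
    String topic = "topic-" + extractDate(value);              // e.g. "topic-2017-01-01"
    try {
        // synchronous send (get()) to avoid data loss, as noted above
        producer.send(new ProducerRecord<>(topic, key, value)).get();
    } catch (Exception e) {
        throw new RuntimeException("failed to forward record to " + topic, e);
    }
});
// ... then build a KafkaStreams instance from the builder and start() it as usual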
There is one more consideration you should take into account. If you do not know your destination topic(s) ahead of time and just rely on the so-called "topic auto creation" feature, you should make sure that those topics are being created with the desired configuration settings (e.g., number of partitions or replication factor).
As an alternative to "topic auto creation" you can also use Admin Client (available since v0.10.1) to create topics with correct configuration. See https://cwiki.apache.org/confluence/display/KAFKA/KIP-4+-+Command+line+and+centralized+administrative+operations
I am using ActiveMQ, Spring.
Is there any way by which I can keep track of all processed messages? I have to keep track of which messages have been processed, and I also want to review these processed messages at a later stage.
Should I use a database for this?
Is there any good library that can make this operation easy?
I do not want to make a table in the database for every kind of model object.
In general, I would suggest that you either log and/or record the messages into the database. If you simply want to review the messages later, simple logging may suffice. If you need to do transactional rollup/searching through a UI, then the database is better.
However, you can also achieve what you want with ActiveMQ virtual destinations. With this, you can have 1 destination forward to 2 other destinations. Then your app could listen on 1 destination, and a copy of the message would sit on the other for your review. For example:
<broker persistent="false" useJmx="false" xmlns="http://activemq.apache.org/schema/core">
  <destinationInterceptors>
    <virtualDestinationInterceptor>
      <virtualDestinations>
        <compositeQueue name="MY.QUEUE">
          <forwardTo>
            <queue physicalName="MY.QUEUE.PROCESS" />
            <topic physicalName="MY.QUEUE.REVIEW" />
          </forwardTo>
        </compositeQueue>
      </virtualDestinations>
    </virtualDestinationInterceptor>
  </destinationInterceptors>
</broker>
This would define a queue MY.QUEUE where each message ends up in BOTH the MY.QUEUE.PROCESS queue and the MY.QUEUE.REVIEW topic.
I would use a database.
Perhaps you could use an ORM such as Hibernate, but plain JDBC or Spring's JdbcTemplate may be better.
Rather than making a separate table for each model object, make a 'message' table and serialize the uncommon portions into a payload blob (or text). You could then use a utility to deserialize the message for review (or playback) later.
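For example, a minimal sketch of that single-table idea with Spring's JdbcTemplate (the table layout and column names here are just an illustration):

import org.springframework.jdbc.core.JdbcTemplate;

// CREATE TABLE processed_message (
//     id           BIGINT IDENTITY PRIMARY KEY,
//     message_type VARCHAR(100),
//     processed_at TIMESTAMP,
//     payload      BLOB
// );

void recordMessage(JdbcTemplate jdbcTemplate, String messageType, byte[] serializedPayload) {
    jdbcTemplate.update(
        "INSERT INTO processed_message (message_type, processed_at, payload) VALUES (?, ?, ?)",
        messageType, new java.sql.Timestamp(System.currentTimeMillis()), serializedPayload);
}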
Kartik,
This is a good programming question, but it's more of a "What should I program?" question rather than a "How can I do this?" question. It's hard to answer a "What should I program?" question because what you should program depends directly on what you need. At best, we can only guess at what you really need.
If you need to update the processed JMS messages, then a database will make it easy to update. If you need to prove that nobody updated a "logged" entry, then a database might not do the job.
Let's say this log is used to see which very slow to process messages still need to complete. Then a database will provide easy searching, provided that the person searching knows SQL. However, if the log is more of an archive, then the database just adds overhead to the entire process, a structured file will do.
In Java there is JDBC for writing to and retrieving from databases, and it is not a hard API to use. Then again, there are also a number of decent logging frameworks, and of course there is always FileOutputStream. Without knowing how this log is to be used, it is very difficult to determine which techniques are really overkill; likewise, it's not possible to know which techniques are not quite enough.
Go back and review how the log is to be used, and then evaluate if the features that databases provide are overkill.
Cheers,
Ed