I'm wondering why the messages are being printed in exactly the same order as in the code.
import akka.actor.AbstractActor
import akka.actor.ActorRef
import akka.actor.ActorSystem
import akka.actor.Props
import akka.event.Logging
import akka.event.LoggingAdapter
data class Request(val name: String)
class Device : AbstractActor() {
    val log = Logging.getLogger(context.system, this)

    override fun createReceive(): Receive {
        return receiveBuilder()
            .matchEquals("print") { log.info("hello i'm a device") }
            .match(Request::class.java) { x -> log.info("A " + x.name) }
            .build()
    }

    companion object {
        fun props(): Props {
            return Props.create { Device() }
        }
    }
}
fun main(args: Array<String>) {
val system = ActorSystem.create("container")
val deviceA = system.actorOf(Device.props())
val deviceC = system.actorOf(Device.props())
val deviceD = system.actorOf(Device.props())
val deviceB = system.actorOf(Device.props())
deviceA.tell(Request("first "), deviceB)
deviceA.tell(Request("second"), deviceC)
deviceA.tell(Request("third"), deviceD)
}
It prints out:
[INFO] [01/31/2019 18:43:16.058] [container-akka.actor.default-dispatcher-2] [akka://container/user/$a] A first
[INFO] [01/31/2019 18:43:16.059] [container-akka.actor.default-dispatcher-2] [akka://container/user/$a] A second
[INFO] [01/31/2019 18:43:16.059] [container-akka.actor.default-dispatcher-2] [akka://container/user/$a] A third
I was expecting the order to differ from "first second third" at least some of the time, but it keeps printing that same order on every run.
Is my expectation incorrect?
The documentation on message delivery order states:
The rule more specifically is that for a given pair of actors,
messages sent directly from the first to the second will not be
received out-of-order. The word directly emphasizes that this
guarantee only applies when sending with the tell operator to the
final destination, not when employing mediators or other message
dissemination features (unless stated otherwise).
You are sending from the main method (outside of the actor system) to deviceA. All three tell calls run sequentially on the same calling thread and target the same actor, so the messages are enqueued in deviceA's mailbox in program order, and an actor processes its mailbox one message at a time. That is exactly the ordering guarantee for a single sender/receiver pair at work.
You are not sending from the deviceB, deviceC, or deviceD actors. Those are just being used as sender references, so that deviceA has someone to reply to; the sender reference has no effect on ordering.
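For contrast, here is a minimal Scala sketch of a setup where interleaving genuinely can occur (the Forwarder actor and a Scala case class Request are hypothetical stand-ins mirroring the Kotlin code above): the two requests reach deviceA through two different sender/receiver pairs, so their arrival order is unspecified.
import akka.actor.{Actor, ActorRef, Props}
// Hypothetical relay: forwards every Request it receives to a fixed target.
class Forwarder(target: ActorRef) extends Actor {
  def receive = {
    case r: Request => target ! r
  }
}
val forwarderB = system.actorOf(Props(new Forwarder(deviceA)))
val forwarderC = system.actorOf(Props(new Forwarder(deviceA)))
// Each forwarder preserves order with deviceA on its own, but the two
// streams may interleave: "via B" and "via C" can arrive in either order.
forwarderB ! Request("via B")
forwarderC ! Request("via C")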
I am new to ZeroMQ and seem to be losing messages in a loop in my begin() method.
I'm wondering if I am missing a piece where I am not queuing messages, or something like that.
When I trigger an event on my publisher that sends two messages to my subscriber with a small gap in between, I never seem to get the second message that is relayed. What am I missing?
class ZMQSubscriber[T <: Transaction, B <: Block](
socket: InetSocketAddress,
hashTxListener: Option[HashDigest => Future[Unit]],
hashBlockListener: Option[HashDigest => Future[Unit]],
rawTxListener: Option[Transaction => Future[Unit]],
rawBlockListener: Option[Block => Future[Unit]]) {
private val logger = BitcoinSLogger.logger
def begin()(implicit ec: ExecutionContext) = {
val context = ZMQ.context(1)
// First, connect our subscriber socket
val subscriber = context.socket(ZMQ.SUB)
val uri = socket.getHostString + ":" + socket.getPort
//subscribe to the appropriate feed
hashTxListener.map { _ =>
subscriber.subscribe(HashTx.topic.getBytes(ZMQ.CHARSET))
logger.debug("subscribed to the transaction hashes from zmq")
}
rawTxListener.map { _ =>
subscriber.subscribe(RawTx.topic.getBytes(ZMQ.CHARSET))
logger.debug("subscribed to raw transactions from zmq")
}
hashBlockListener.map { _ =>
subscriber.subscribe(HashBlock.topic.getBytes(ZMQ.CHARSET))
logger.debug("subscribed to the hashblock stream from zmq")
}
rawBlockListener.map { _ =>
subscriber.subscribe(RawBlock.topic.getBytes(ZMQ.CHARSET))
logger.debug("subscribed to raw block")
}
subscriber.connect(uri)
subscriber.setRcvHWM(0)
logger.info("Connection to zmq client successful")
while (true) {
val notificationTypeStr = subscriber.recvStr(ZMQ.DONTWAIT)
val body = subscriber.recv(ZMQ.DONTWAIT)
Future(processMsg(notificationTypeStr, body))
}
}
private def processMsg(topic: String, body: Seq[Byte])(implicit ec: ExecutionContext): Future[Unit] = Future {
val notification = ZMQNotification.fromString(topic)
val res: Option[Future[Unit]] = notification.flatMap {
case HashTx =>
hashTxListener.map { f =>
val hash = Future(DoubleSha256Digest.fromBytes(body))
hash.flatMap(f(_))
}
case RawTx =>
rawTxListener.map { f =>
val tx = Future(Transaction.fromBytes(body))
tx.flatMap(f(_))
}
case HashBlock =>
hashBlockListener.map { f =>
val hash = Future(DoubleSha256Digest.fromBytes(body))
hash.flatMap(f(_))
}
case RawBlock =>
rawBlockListener.map { f =>
val block = Future(Block.fromBytes(body))
block.flatMap(f(_))
}
}
}
}
So this seems to have been solved by using ZMsg.recvMsg() in the while loop instead of
val notificationTypeStr = subscriber.recvStr(ZMQ.DONTWAIT)
val body = subscriber.recv(ZMQ.DONTWAIT)
I'm not sure why this works, but it does. So here is what my begin method looks like now:
while (run) {
val zmsg = ZMsg.recvMsg(subscriber)
val notificationTypeStr = zmsg.pop().getString(ZMQ.CHARSET)
val body = zmsg.pop().getData
Future(processMsg(notificationTypeStr, body))
}
Future.successful(())
}
How the blocking vs. non-blocking modus operandi works:
The trick is in the (non-)blocking mode of the respective .recv() method call.
A call to the subscriber.recv(ZMQ.DONTWAIT) method returns immediately, whether or not anything has arrived, so your second part (the body) may, quite legally, contain nothing, even though the publisher side indeed dispatched a pair of messages via a pair of .send() calls. (One may also object that there is a chance the sender was actually sending just one message, in multi-part fashion; the MCVE code is not specific on this part.)
So, once you have moved your code from non-blocking mode (in the O/P) into a principally blocking mode, which synchronizes the further flow of the code with the external event of a complete, plausibly formatted message actually arriving, and does not return earlier:
val zmsg = ZMsg.recvMsg(subscriber) // BLOCKS until a whole ZMsg has arrived
both of the subsequently .pop()-ed parts simply unload the components of that one message (ref. the remark above on the multi-part ZMsg structure the publisher side actually sent).
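A minimal sketch of the distinction, assuming the JeroMQ API already used above; if you do want to stay non-blocking, you must treat a null return as "nothing arrived yet" instead of pressing on to read the body:
// Non-blocking: recvStr() returns null when no message is pending.
val topic = subscriber.recvStr(ZMQ.DONTWAIT)
if (topic != null) {
  // Frames of one multipart message are delivered atomically, so once the
  // first frame is here, the remaining frames can be read without waiting.
  val body = subscriber.recv(0)
  Future(processMsg(topic, body))
}
// Blocking alternative: ZMsg.recvMsg(subscriber) parks the thread until a
// complete multipart message has arrived, which is why the edited code works.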
Safety next: unlimited allocations vs. mandatory blocking / dropping messages?
The code surprised me on several other points as well. One is the rather "late" call to the .connect() method, coming after all the detailed socket-archetype settings. While this may work as intended, it leaves an even tighter (smaller) time window for the Context() instance to set up and (re-)negotiate all the relevant connection details so as to become ready to operate.
One particular line attracted my attention: subscriber.setRcvHWM(0). This is a version/archetype-dependent trick, and a high-water mark of zero means "no limit on the receive queue". That leaves the application vulnerable to unbounded memory growth under a slow consumer, and I would not advise doing so in any production-grade application.
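As a hedged sketch of the safer setup, following the usual ZeroMQ convention that socket options are applied before .connect() so they take effect on the connection, and with a finite high-water mark (the value 100000 is an arbitrary illustration; uri and RawTx.topic come from the code above):
val context = ZMQ.context(1)
val subscriber = context.socket(ZMQ.SUB)
// Configure first: a finite HWM bounds the receive queue, so a slow
// consumer drops messages instead of exhausting memory (0 means "no limit").
subscriber.setRcvHWM(100000)
subscriber.subscribe(RawTx.topic.getBytes(ZMQ.CHARSET))
// Connect last, once the socket is fully configured.
subscriber.connect(uri)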
I need to run an aggregation Spark job using spark-jobserver with low-latency contexts. I have this Scala runner, which calls a Java method from a Java class.
object AggregationRunner extends SparkJob {
def main(args: Array[String]) {
val ctx = new SparkContext("local[4]", "spark-jobs")
val config = ConfigFactory.parseString("")
val results = runJob(ctx, config)
}
override def validate(sc: SparkContext, config: Config): SparkJobValidation = {
SparkJobValid
}
override def runJob(sc: SparkContext, config: Config): Any = {
val context = new JavaSparkContext(sc)
val aggJob = new ServerAggregationJob()
val id = config.getString("input.string").split(" ")(0)
val field = config.getString("input.string").split(" ")(1)
return aggJob.aggregate(context, id, field)
}
}
However, I get the following error. I tried taking out the content returned by the Java method, and am now just returning a test string, but it still doesn't work:
{
"status": "ERROR",
"result": {
"message": "Ask timed out on [Actor[akka://JobServer/user/context-supervisor/single-context#1243999360]] after [10000 ms]",
"errorClass": "akka.pattern.AskTimeoutException",
"stack": ["akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:333)", "akka.actor.Scheduler$$anon$7.run(Scheduler.scala:117)", "scala.concurrent.Future$InternalCallbackExecutor$.scala$concurrent$Future$InternalCallbackExecutor$$unbatchedExecute(Future.scala:694)", "scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:691)", "akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(Scheduler.scala:467)", "akka.actor.LightArrayRevolverScheduler$$anon$8.executeBucket$1(Scheduler.scala:419)", "akka.actor.LightArrayRevolverScheduler$$anon$8.nextTick(Scheduler.scala:423)", "akka.actor.LightArrayRevolverScheduler$$anon$8.run(Scheduler.scala:375)", "java.lang.Thread.run(Thread.java:745)"]
}
}
I am not too sure why there is a timeout since I am only returning a string.
EDIT
So I figured out that the issue was occurring because I was using a Spark context that had been created before updating the JAR. However, now that I try to use JavaSparkContext inside the Spark job, it returns to the error shown above.
What would be a permanent way to get rid of the error?
Also, would the fact that I am running a heavy Spark job in a local Docker container be a plausible reason for the timeout?
To resolve the ask timeout issue, add or change the following properties in the job server configuration file.
spray.can.server {
idle-timeout = 210 s
request-timeout = 200 s
}
For more information, take a look at the troubleshooting guide: https://github.com/spark-jobserver/spark-jobserver/blob/d1843cbca8e0d07f238cc664709e73bbeea05f2c/doc/troubleshooting.md
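As a sketch of where these settings live, assuming the common deployment layout in which an environment config file is passed to the job server on startup (the name local.conf is an assumption; use whatever file your deployment actually loads). Note that spray requires idle-timeout to stay greater than request-timeout:
# local.conf (hypothetical name), merged into the job server configuration
spray.can.server {
  idle-timeout = 210 s     # must remain greater than request-timeout
  request-timeout = 200 s
}
Raising the HTTP timeouts only papers over slow context startup, though. Since your error disappeared with a fresh context, pre-creating a long-lived context through the job server's POST /contexts endpoint and submitting jobs against it is usually the more permanent fix.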
I have an actor that routes messages to a group of other actors which act as wrappers on top of a volatile service. So far everything is great, but I'd like to be able to control how many actors exist acting on this service (since they may represent socket connections or other physical resources), so being able to scale them up and down would be nice.
I see that there is a removeRoutee method on the router, and it does remove the routees, but is there a way to send a poison pill to my child actors first, before they are removed? The docs say that a poison pill message should come through when removing the routee this way, but I'm not seeing that happen.
I have code like this
final Collection<Routee> routees = JavaConversions.asJavaCollection(router.routees());
for (final Routee routee : routees.stream()
.limit(numberToRemove)
.collect(toList())) {
router = router.removeRoutee(routee);
}
So it looks like I was missing the fact that I had to send the PoisonPill manually in order to stop my routees. Note that a PoisonPill is enqueued like any ordinary message, so a routee finishes everything already in its mailbox before stopping. Here is a full Scala demo app:
import akka.actor._
import akka.routing._
case object Add
case object Remove
class Worker(id: Integer) extends UntypedActor {
  println(s"Made worker $id")

  @throws[Exception](classOf[Exception])
  override def preStart(): Unit = {
    println(s"Starting $id")
  }

  @throws[Exception](classOf[Exception])
  override def postStop(): Unit = {
    println(s"Stopping $id")
  }

  @throws[Exception](classOf[Exception])
  override def onReceive(message: Any): Unit = message match {
    case _ => println(s"Message received on actor $id")
  }
}
class Master extends Actor {
var count = 0
def makeWorker() = {
val id = count
count = count + 1
context.actorOf(Props(new Worker(id)))
}
var activeWorkers = Seq.fill(2) {
makeWorker()
}
var router = Router(RoundRobinRoutingLogic(), activeWorkers.map(r => {
context watch r
ActorRefRoutee(r)
}).toIndexedSeq)
def receive = {
case Remove =>
println("Removing route")
val head = router.routees.head.asInstanceOf[ActorRefRoutee].ref
head ! PoisonPill
context unwatch head
router = router.removeRoutee(head)
printRoutes()
case Add =>
println("Adding route")
val worker = makeWorker()
context watch worker
router = router.addRoutee(worker)
printRoutes()
case w: AnyRef =>
printRoutes()
router.route(w, sender())
}
def printRoutes(): Unit = {
val size = router.routees.size
println(s"Total routes $size")
}
}
object Main extends App {
val system = ActorSystem.create("foo")
val master = system.actorOf(Props[Master])
master ! "foo"
master ! Remove
master ! "foo"
master ! "bar"
master ! Add
master ! "biz"
}
I am developing an Android application using Java. All I want is to
record a song and generate its fingerprint code (CODE), then query the echoprint server for a match.
If a match is not found, upload it to the server (ingest) for future reference.
I have been able to achieve the first part. Can someone advise me on the second part in Java? (P.S.: I've seen how to do it using Python code, but that won't be helpful in my case.)
Another question: may I achieve the second objective with the global echoprint server, or do I need to set up one of my own?
The references I've used are:
http://masl.cis.gvsu.edu/2012/01/25/android-echoprint/
https://github.com/gvsumasl/EchoprintForAndroid
To insert a song into the echoprint server database, all you need to do is call the ingest method. Basically, it is just an HTTP POST request with the correct parameters. Here is the Scala code (Java would be very similar) that I am using for that:
import EasyJSON.JSON
import EasyJSON.ScalaJSON
import dispatch.Defaults.executor
import dispatch._
class EchoprintAPI {
val API_URL = "http://your.api.server"
def queryURL(code: String) = url(s"$API_URL/query?fp_code=$code")
def query(code: String): scala.concurrent.Future[ScalaJSON] = {
jsonResponse(queryURL(code))
}
def ingest(json: ScalaJSON, trackId: String): scala.concurrent.Future[ScalaJSON] = {
val metadata = json("metadata")
val request = url(s"$API_URL/ingest").POST
.addParameter("fp_code", json("code").toString)
.addParameter("artist", metadata("artist").toString)
.addParameter("release", metadata("release").toString)
.addParameter("track", metadata("title").toString)
.addParameter("codever", metadata("version").toString)
.addParameter("length", metadata("duration").toString)
.addParameter("genre", metadata("genre").toString)
.addParameter("bitrate", metadata("bitrate").toString)
.addParameter("source", metadata("filename").toString)
.addParameter("track_id", trackId)
.addParameter("sample_rate", metadata("sample_rate").toString)
jsonResponse(request)
}
def delete(trackId: String): scala.concurrent.Future[ScalaJSON] = {
jsonResponse(url(s"$API_URL/query?track_id=$trackId").DELETE)
}
protected def jsonResponse(request: dispatch.Req): scala.concurrent.Future[EasyJSON.ScalaJSON] = {
val response = Http(request OK as.String)
for (c <- response) yield JSON.parseJSON(c)
}
}
To generate the fingerprint code, you can use an echoprint-codegen command-line call, or use the Java JNI integration with the C library.
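For the command-line route, here is a small Scala sketch (the file path is a placeholder) that shells out to echoprint-codegen, whose documented invocation is echoprint-codegen <file> [start_sec] [duration_sec] and which prints JSON containing the code and metadata fields that ingest() above expects:
import scala.sys.process._
// Runs the codegen binary and captures its JSON output as a string;
// .!! throws an exception if the binary exits with a non-zero status.
def generateFingerprint(audioFile: String,
                        startSec: Int = 10,
                        durationSec: Int = 30): String =
  Seq("echoprint-codegen", audioFile, startSec.toString, durationSec.toString).!!
val json = generateFingerprint("/path/to/song.mp3")
// Parse with the JSON library of your choice, then feed the result to
// EchoprintAPI.ingest as shown above.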
I have a WebSocket in my Play application and I want to write a test for it, but I couldn't find any example of how to write such a test. I found a discussion in the play-framework Google group, but there has been no activity there recently.
So, are there any ideas on how to test WebSockets in a Java test?
You can retrieve the underlying Iteratee and Enumerator and test them directly. This way you don't need to use a browser. You do need akka-testkit, though, to cope with the asynchronous nature of iteratees.
A Scala example:
object TestController extends Controller { // named so it does not shadow play.api.mvc.WebSocket
def websocket = WebSocket.async[JsValue] { request =>
Future.successful(Iteratee.ignore[JsValue] -> Enumerator.apply[JsValue](Json.obj("type" -> "error")))
}
}
class WebSocketSpec extends PlaySpecification {
"WebSocket" should {
"respond with error packet" in new WithApplication {
val request = FakeRequest()
var message: JsValue = null
val iteratee = Iteratee.foreach[JsValue](chunk => message = chunk)(Akka.system.dispatcher)
TestController.websocket.f(request)(Enumerator.empty[JsValue], iteratee)
TestKit.awaitCond(message == Json.obj("type" -> "error"), 1 second)
}
}
}
I test WebSockets code using Firefox:
https://github.com/schleichardt/stackoverflow-answers/commit/13d5876791ef409e092e4a097f54247d851e17dc#L8R14
For Java it works similarly, replacing 'HTMLUNIT' with 'FIREFOX': http://www.playframework.com/documentation/2.1.x/JavaFunctionalTest
Chrome provides a plugin to test websocket services.
Edit
Using the plugin, you can provide the websocket URL and the request data, and send messages to the service. The message log shows the messages sent from the client as well as the service responses.
Assume that you have a websocket library that returns the Future[(Iteratee[JsValue, Unit], Enumerator[JsValue])] your controller uses:
trait WSLib {
def connect: Future[(Iteratee[JsValue, Unit], Enumerator[JsValue])]
}
And you want to test this library.
Here is a context you can use:
trait WebSocketContext extends WithApplication {
val aSecond = FiniteDuration(1, TimeUnit.SECONDS)
case class Incoming(iteratee: Iteratee[JsValue, Unit]) {
def feed(message: JsValue) = {
iteratee.feed(Input.El(message))
}
def end(wait: Long = 100) = {
Thread.sleep(wait) //wait until all previous fed messages are handled
iteratee.feed(Input.EOF)
}
}
case class OutGoing(enum: Enumerator[JsValue]) {
val messages = enum(Iteratee.fold(List[JsValue]()) {
(l, jsValue) => jsValue :: l
}).flatMap(_.run)
def get: List[JsValue] = {
Await.result(messages, aSecond)
}
}
def wrapConnection(conn: => Future[(Iteratee[JsValue, Unit], Enumerator[JsValue])]): (Incoming, OutGoing) = {
val (iteratee, enumerator) = Await.result(conn, aSecond)
(Incoming(iteratee), OutGoing(enumerator))
}
}
Then your tests can be written as
"return all subscribers when asked for info" in new WebSocketContext {
val (incoming, outgoing) = wrapConnection(myWSLib.connect)
incoming.feed(Json.obj("message" -> "hello"))
incoming.end() //this closes the connection
val responseMessages = outgoing.get //you only call this "get" after the connection is closed
responseMessages.size must equalTo(1)
responseMessages must contain(Json.obj("reply" -> "Hey"))
}
Incoming represents the messages coming from the client side, while OutGoing represents the messages sent from the server. To write a test, you first feed in the incoming messages via incoming, then close the connection by calling incoming.end(), and finally obtain the complete list of outgoing messages from the outgoing.get method (only call get after the connection is closed).