ZMQ missing events being propagated in jeromq scala - java

I am new to ZeroMQ and seem to be losing messages in a loop in my begin() method.
I'm wondering if I am missing a piece, such as not queuing messages somewhere.
When I trigger an event on my publisher that sends two messages to my subscriber with a small gap in between, I never seem to receive the second message that is relayed. What am I missing?
class ZMQSubscriber[T <: Transaction, B <: Block](
    socket: InetSocketAddress,
    hashTxListener: Option[HashDigest => Future[Unit]],
    hashBlockListener: Option[HashDigest => Future[Unit]],
    rawTxListener: Option[Transaction => Future[Unit]],
    rawBlockListener: Option[Block => Future[Unit]]) {

  private val logger = BitcoinSLogger.logger

  def begin()(implicit ec: ExecutionContext) = {
    val context = ZMQ.context(1)
    // First, connect our subscriber socket
    val subscriber = context.socket(ZMQ.SUB)
    val uri = socket.getHostString + ":" + socket.getPort

    // subscribe to the appropriate feeds
    hashTxListener.map { _ =>
      subscriber.subscribe(HashTx.topic.getBytes(ZMQ.CHARSET))
      logger.debug("subscribed to the transaction hashes from zmq")
    }

    rawTxListener.map { _ =>
      subscriber.subscribe(RawTx.topic.getBytes(ZMQ.CHARSET))
      logger.debug("subscribed to raw transactions from zmq")
    }

    hashBlockListener.map { _ =>
      subscriber.subscribe(HashBlock.topic.getBytes(ZMQ.CHARSET))
      logger.debug("subscribed to the hashblock stream from zmq")
    }

    rawBlockListener.map { _ =>
      subscriber.subscribe(RawBlock.topic.getBytes(ZMQ.CHARSET))
      logger.debug("subscribed to raw block")
    }

    subscriber.connect(uri)
    subscriber.setRcvHWM(0)
    logger.info("Connection to zmq client successful")

    while (true) {
      val notificationTypeStr = subscriber.recvStr(ZMQ.DONTWAIT)
      val body = subscriber.recv(ZMQ.DONTWAIT)
      Future(processMsg(notificationTypeStr, body))
    }
  }

  private def processMsg(topic: String, body: Seq[Byte])(implicit ec: ExecutionContext): Future[Unit] = Future {
    val notification = ZMQNotification.fromString(topic)
    val res: Option[Future[Unit]] = notification.flatMap {
      case HashTx =>
        hashTxListener.map { f =>
          val hash = Future(DoubleSha256Digest.fromBytes(body))
          hash.flatMap(f(_))
        }
      case RawTx =>
        rawTxListener.map { f =>
          val tx = Future(Transaction.fromBytes(body))
          tx.flatMap(f(_))
        }
      case HashBlock =>
        hashBlockListener.map { f =>
          val hash = Future(DoubleSha256Digest.fromBytes(body))
          hash.flatMap(f(_))
        }
      case RawBlock =>
        rawBlockListener.map { f =>
          val block = Future(Block.fromBytes(body))
          block.flatMap(f(_))
        }
    }
  }
}

So this seems to have been solved by using a ZMsg.recvMsg() in the while-loop instead of
val notificationTypeStr = subscriber.recvStr(ZMQ.DONTWAIT)
val body = subscriber.recv(ZMQ.DONTWAIT)
I'm not sure why this works, but it does. So here is what my begin method looks like now
while (run) {
  val zmsg = ZMsg.recvMsg(subscriber)
  val notificationTypeStr = zmsg.pop().getString(ZMQ.CHARSET)
  val body = zmsg.pop().getData
  Future(processMsg(notificationTypeStr, body))
}
Future.successful(Unit)
}

What am I missing?
How the blocking vs. non-blocking modes of operation work:
The trick is in the (non-)blocking mode of the respective call to the .recv() method.
A second call to the subscriber.recv(ZMQ.DONTWAIT) method returns immediately, so the second part (the body) may legally contain nothing, even though your premise was that a pair of messages was indeed dispatched from the publisher side (a pair of .send() method calls; one may also object that there is a chance the sender was actually sending just one message in a multi-part fashion, the MCVE code is not specific on this part).
So, once you have moved your code from non-blocking mode (in the O/P) into a principally blocking mode (which locks / syncs the further flow of the code with the external event of the arrival of any plausibly formatted message, not returning earlier), as in:
val zmsg = ZMsg.recvMsg(subscriber) // BLOCKS until a first ZMsg has arrived
both of the further processed .pop()-ed parts simply unload the components that were actually received (see the remark above on the multi-part ZMsg structure actually sent by the publisher side).
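To make the difference concrete, here is a minimal sketch (assuming a connected, subscribed jeromq SUB socket named subscriber, as in the question):
// Non-blocking mode: each call returns immediately; null means "no frame queued yet",
// so both parts must be checked before processing.
val topic = subscriber.recvStr(ZMQ.DONTWAIT) // may be null
if (topic != null) {
  val body = subscriber.recv(ZMQ.DONTWAIT)   // may also be null if the second frame
                                             // has not been queued yet
  if (body != null) {
    // only now is it safe to hand (topic, body) to processMsg
  }
}

// Blocking mode: the call does not return until something has arrived.
// ZMsg.recvMsg(subscriber) does the same for the whole multi-part message,
// which is why the reworked loop above no longer loses the body.
val zmsgBlocking = ZMsg.recvMsg(subscriber)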
Safety next: unlimited allocations vs. mandatory blocking / dropping messages?
The code surprised me on several points, one being the rather "late" call to the .connect() method, placed after all of the detailed socket-archetype settings (which normally get arranged after a request to set up a connection). While this may work fine, as intended, it leaves an even tighter (smaller) time window for the .Context() instance to set up and (re-)negotiate all the relevant connection details so as to become RTO (ready to operate).
One particular line attracted my attention: subscriber.setRcvHWM(0). This is a version-archetype dependent trick, and a high-water mark of zero means an unlimited receive queue, which leaves the application vulnerable to unbounded memory growth. I would not advise doing so in any production-grade application.
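If dropping messages under back-pressure is acceptable, a bounded high-water mark avoids the unlimited-allocation risk. A minimal sketch, using an arbitrary example capacity of 1000 (not a value from the original post):
subscriber.setRcvHWM(1000) // finite receive high-water mark: once this many messages
                           // are queued, newer ones are dropped instead of letting
                           // memory grow without bound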

Related

Translate Kotlin to Java BLE callback

I'm trying to translate this part of the code to Java. I understand it's waiting for a callback from a BLE action, and if any data is returned it will be added to statusReport. The only issue is that something seems to be looping somewhere, because in the real application this callback fires multiple times, but on my side I only get one response.
private val statusReportCallback = DataReceivedCallback { _, data ->
    convertByteArrayToASCII(data.value)?.let {
        Endpoint.addToStatusReport(it)
        if (nextWillBeICCID) iccid = it.also { notifyDeviceEvent(EventType.ICCID) }.also { nextWillBeICCID = false }
        when (it) {
            "power off" -> Endpoint.commsTestInProgress.set(false)
            "ICCID:" -> nextWillBeICCID = true
            "POST: Success" -> Endpoint.onServerConnected.value = Event(true)
        }
    }
}

Remove a route from a routee and get poison pill?

I have an actor who is routing messages to a group of other actors that act as wrappers on top of a volatile service. So far everything is great, but I'd like to be able to control how many actors exist acting on this service (since they may represent socket connections or other physical resources), so being able to manage scaling them would be nice.
I see that there is a remove routee method on the router and it does remove the routes, but is there a way to send a poison pill to my child actors first before they are removed? The docs say that a poison pill message should come through when removing the routee this way but I'm not seeing that happen.
I have code like this
final Collection<Routee> routees = JavaConversions.asJavaCollection(router.routees());
for (final Routee routee : routees.stream()
        .limit(numberToRemove)
        .collect(toList())) {
    router = router.removeRoutee(routee);
}
So it looks like I was missing the fact that I had to send a PoisonPill manually in order to stop my routees. Here is a full Scala demo app:
import akka.actor._
import akka.routing._

case class Add()
case class Remove()

class Worker(id: Integer) extends UntypedActor {
  println(s"Made worker $id")

  @throws[Exception](classOf[Exception])
  override def preStart(): Unit = {
    println(s"Starting $id")
  }

  @throws[Exception](classOf[Exception])
  override def postStop(): Unit = {
    println(s"Stopping $id")
  }

  @throws[Exception](classOf[Exception])
  override def onReceive(message: Any): Unit = message match {
    case _ => println(s"Message received on actor $id")
  }
}

class Master extends Actor {
  var count = 0

  def makeWorker() = {
    val id = count
    count = count + 1
    context.actorOf(Props(new Worker(id)))
  }

  var activeWorkers = Seq.fill(2) {
    makeWorker()
  }

  var router = Router(RoundRobinRoutingLogic(), activeWorkers.map(r => {
    context watch r
    ActorRefRoutee(r)
  }).toIndexedSeq)

  def receive = {
    case Remove =>
      println("Removing route")
      val head = router.routees.head.asInstanceOf[ActorRefRoutee].ref
      head ! PoisonPill
      context unwatch head
      router = router.removeRoutee(head)
      printRoutes()
    case Add =>
      println("Adding route")
      val worker = makeWorker()
      context watch worker
      router = router.addRoutee(worker)
      printRoutes()
    case w: AnyRef =>
      printRoutes()
      router.route(w, sender())
  }

  def printRoutes(): Unit = {
    val size = router.routees.size
    println(s"Total routes $size")
  }
}

object Main extends App {
  var system = ActorSystem.create("foo")
  var master = system.actorOf(Props[Master])

  master ! "foo"
  master ! Remove
  master ! "foo"
  master ! "bar"
  master ! Add
  master ! "biz"
}

Scala opening write to stdout or file

Let's say I have a function
writeToFileOrStdout(fname: String = Nil) = { ... }
If the user passes a string value for fname, then I'd like to open a file with that name and write to it; otherwise, I'd like to print to stdout. I could always just write an if statement to take care of this, but how would I write a case statement on fname and open the correct corresponding outputStream?
val outStream = fname match {
  case Nil => ???
  case _ => new java.io.FileOutputStream(new java.io.File(fname))
}
outStream.write( ... )
Thanks!
Why not rewrite the function as:
def writeToFileOrStdout(fname: Option[String] = None) = {
  val outStream = fname match {
    case Some(name) => new java.io.FileOutputStream(new java.io.File(name))
    case None => System.out
  }
  ...
}
It's always a good idea to use Option for an optional input as opposed to using null. That's basically what it's there for. In good Scala code you will not see explicit references to null.
In fact, your code doesn't even compile for me. Nil is used to represent an empty list, not a null or non-supplied String.
To augment cmbaxter's response...
Mapping a String with a possible null value to Option[String] is trivial: Option(stringValue) will return None where stringValue is null, and Some(stringValue) where non-null.
Thus, you can either:
writeToFileOrStdout(Option(stringValue)), or
If you're stuck on String (and possibly a null value) as the parameter to writeToFileOrStdout, then internally use Option(fname) and match on what it returns:
def writeToFileOrStdout(fname: String = null) = {
  val outStream = Option(fname) match {
    case Some(name) => new java.io.FileOutputStream(new java.io.File(name))
    case None => System.out
  }
  ...
}
To further augment cmbaxter's response, you might consider writing this:
def filenameToOutputStream(name: String) =
  new java.io.FileOutputStream(new java.io.File(name))

def writeToFileOrStdout(fname: Option[String] = None) = {
  val outStream = fname map filenameToOutputStream getOrElse System.out
  ...
}
As the post Idiomatic Scala: Your Options Do Not Match suggests, this might be more idiomatic Scala.
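For completeness, a quick hypothetical usage sketch of the Option-based signature from the answers above:
writeToFileOrStdout(Some("out.txt"))   // opens and writes to the file out.txt
writeToFileOrStdout()                  // fname defaults to None, so output goes to stdout

// Starting from a possibly-null String (e.g. a value handed over from Java code):
val maybeName: String = null
writeToFileOrStdout(Option(maybeName)) // Option(null) evaluates to None, so stdout again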

Test WebSocket in PlayFramework

I have a WebSocket in my Play application and I want to write a test for it, but I couldn't find any example on how to write such a test. I found a discussion in the play-framework Google group but there has been no activity recently.
So, are there any ideas on how to test WebSockets in a Java test?
You can retrieve the underlying Iteratee and Enumerator and test them directly. This way you don't need to use a browser. You do need akka-testkit, though, to cope with the asynchronous nature of iteratees.
A Scala example:
object WebSocket extends Controller {
  def websocket = WebSocket.async[JsValue] { request =>
    Future.successful(Iteratee.ignore[JsValue] -> Enumerator.apply[JsValue](Json.obj("type" -> "error")))
  }
}

class WebSocketSpec extends PlaySpecification {
  "WebSocket" should {
    "respond with error packet" in new WithApplication {
      val request = FakeRequest()
      var message: JsValue = null
      val iteratee = Iteratee.foreach[JsValue](chunk => message = chunk)(Akka.system.dispatcher)
      Controller.websocket().f(request)(Enumerator.empty[JsValue], iteratee)
      TestKit.awaitCond(message == Json.obj("type" -> "error"), 1 second)
    }
  }
}
I test WebSockets code using Firefox:
https://github.com/schleichardt/stackoverflow-answers/commit/13d5876791ef409e092e4a097f54247d851e17dc#L8R14
For Java it works similarly, replacing 'HTMLUNIT' with 'FIREFOX': http://www.playframework.com/documentation/2.1.x/JavaFunctionalTest
Chrome provides a plugin to test a WebSocket service.
Edit
Using the plugin you can provide the WebSocket URL and the request data and send messages to the service. The message log shows the messages sent from the client as well as the service's responses.
Assume that you have a WebSocket library that returns the Future[(Iteratee[JsValue, Unit], Enumerator[JsValue])] your controller uses:
trait WSLib {
  def connect: Future[(Iteratee[JsValue, Unit], Enumerator[JsValue])]
}
And you want to test this library.
Here is a context you can use:
trait WebSocketContext extends WithApplication {
  val aSecond = FiniteDuration(1, TimeUnit.SECONDS)

  case class Incoming(iteratee: Iteratee[JsValue, Unit]) {
    def feed(message: JsValue) = {
      iteratee.feed(Input.El(message))
    }
    def end(wait: Long = 100) = {
      Thread.sleep(wait) // wait until all previously fed messages are handled
      iteratee.feed(Input.EOF)
    }
  }

  case class OutGoing(enum: Enumerator[JsValue]) {
    val messages = enum(Iteratee.fold(List[JsValue]()) {
      (l, jsValue) => jsValue :: l
    }).flatMap(_.run)

    def get: List[JsValue] = {
      Await.result(messages, aSecond)
    }
  }

  def wrapConnection(connection: => Future[(Iteratee[JsValue, Unit], Enumerator[JsValue])]): (Incoming, OutGoing) = {
    val (iteratee, enumerator) = Await.result(connection, aSecond)
    (Incoming(iteratee), OutGoing(enumerator))
  }
}
Then your tests can be written as
"return all subscribers when asked for info" in new WebSocketContext {
val (incoming, outgoing) = wrapConnection(myWSLib.connect)
incoming.feed(JsObject("message" => "hello"))
incoming.end() //this closes the connection
val responseMessages = outgoing.get //you only call this "get" after the connection is closed
responseMessages.size must equalTo(1)
responseMessages must contain(JsObject("reply" => "Hey"))
}
Incoming represents the messages coming from the client side, while OutGoing represents the messages sent by the server. To write a test, you first feed in the client messages through incoming, then close the connection by calling incoming.end, and finally get the complete list of outgoing messages from the outgoing.get method.

jruby: java class accepts arguments in IRB but returns error when called from a class

I am using a class that has overloaded constructors, i.e. optional argument signatures (not sure if that matters in this case, but maybe).
When I call this from IRB it works as expected, e.g. it accepts the arguments
(filtering namespaces and passwords with [filtered] where needed to keep secret stuff secret and my company happy)
jruby-1.5.0 > require 'java'
=> true
jruby-1.5.0 > Dir.glob('lib/java/*.jar').each{|jar| require jar}
=> ["lib/java/[filtered].jar", "lib/java/[filtered].jar", "lib/java/[filtered].jar"]
jruby-1.5.0 > import "[filtered].His351n1"
=> Java::[filtered]::His351n1
jruby-1.5.0 > broker = [filtered].Broker.new('[filtered]', '[filtered]')
=> #<Java::[filtered]::Broker:0x4c4936f3>
jruby-1.5.0 > rpc = "[filtered]"
=> "[filtered]"
jruby-1.5.0 > his = His351n1.new(broker, rpc)
=> #<Java::[filtered]::His351n1:0x7fb6a1c4>
and here is my spec and matching code
before(:each) do
  @base = Legacy::Base.new
end

it "should create a valid his351n1 object" do
  his = @base.create_his351n1
  puts his.inspect
end

# from within Legacy::Base
def create_his351n1
  his = His351n1.new(build_broker, rpc)
end
and finally, the error which fails on the call to His351n1.new
1)
ArgumentError in 'Legacy::Base should create a valid his351n1 object'
wrong # of arguments(2 for 0)
To complicate things, on the irb, this is also apparently valid:
jruby-1.5.0 > his = His351n1.new
=> #<Java::[filtered]::His351n1:0x5ad3c69c>
Also, here are the overloaded Java constructors:
public His351n1() {
    super();
}

public His351n1(Broker broker) {
    this(broker, DEFAULT_SERVER);
}

public His351n1(BrokerService bs) {
    this(bs.getBroker(), bs.toString());
}

public His351n1(Broker broker, String serverAddr) {
    super(broker, serverAddr, "string", true);
}

public His351n1(final Broker broker, final String serverAddr, final String library) {
    super(broker, serverAddr, library, true);
}
It seems that you have to use the fully qualified namespace when instantiating the object, i.e.:
his = Java::[filtered_namespace]::His351n1.new(build_broker, rpc)
