I want to connect two applications via RSocket. One is written in Go and the second in Kotlin.
I want to set up a connection where the client sends a stream of data and the server sends a confirmation response.
The problem is with waiting for all the elements: if the server does not call BlockLast(ctx), the whole stream is read, but the response is sent before all entries arrive. If BlockLast(ctx) is added, the Go server gets stuck.
I also wrote a client in Kotlin, and in that case the whole communication works perfectly fine.
Can anyone help?
Go Server:
package main
import (
"context"
"github.com/golang/protobuf/proto"
"github.com/rsocket/rsocket-go"
"github.com/rsocket/rsocket-go/payload"
"github.com/rsocket/rsocket-go/rx"
"github.com/rsocket/rsocket-go/rx/flux"
"rsocket-go-rpc-test/proto"
)
func main() {
addr := "tcp://127.0.0.1:8081"
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
err := rsocket.Receive().
Fragment(1024).
Resume().
Acceptor(func(setup payload.SetupPayload, sendingSocket rsocket.CloseableRSocket) (rsocket.RSocket, error) {
return rsocket.NewAbstractSocket(
rsocket.RequestChannel(func(payloads rx.Publisher) flux.Flux {
println("START")
payloads.(flux.Flux).
DoOnNext(func(input payload.Payload) {
chunk := &pl_dwojciechowski_proto.Chunk{}
proto.Unmarshal(input.Data(), chunk)
println(string(chunk.Content))
}).BlockLast(ctx) // with BlockLast the server hangs here; without it, the response below is sent before all chunks arrive
return flux.Create(func(i context.Context, sink flux.Sink) {
status, _ := proto.Marshal(&pl_dwojciechowski_proto.UploadStatus{
Message: "OK",
Code: 0,
})
sink.Next(payload.New(status, make([]byte, 1)))
sink.Complete()
println("SENT")
})
}),
), nil
}).
Transport(addr).
Serve(ctx)
panic(err)
}
Kotlin Client:
private fun clientCall() {
val rSocket = RSocketFactory.connect().transport(TcpClientTransport.create(8081)).start().block()
val client = FileServiceClient(rSocket)
val requests: Flux<Chunk> = Flux.range(1, 10)
.map { i: Int -> "sending -> $i" }
.map<Chunk> {
Chunk.newBuilder()
.setContent(ByteString.copyFrom(it.toByteArray())).build()
}
val response = client.send(requests).block() ?: throw Exception("")
rSocket.dispose()
System.out.println(response.message)
}
And the equivalent of the Go server, written in Kotlin:
val serviceServer = FileServiceServer(DefaultService(), Optional.empty(), Optional.empty())
val closeableChannel = RSocketFactory.receive()
.acceptor { setup: ConnectionSetupPayload?, sendingSocket: RSocket? ->
Mono.just(
RequestHandlingRSocket(serviceServer)
)
}
.transport(TcpServerTransport.create(8081))
.start()
.block()
closeableChannel.onClose().block()
class DefaultService : FileService {
override fun send(messages: Publisher<Service.Chunk>?, metadata: ByteBuf?): Mono<Service.UploadStatus> {
return Flux.from(messages)
.windowTimeout(10, Duration.ofSeconds(500))
.flatMap(Function.identity())
.doOnNext { println(it.content.toStringUtf8()) }
.then(Mono.just(Service.UploadStatus.newBuilder().setCode(Service.UploadStatusCode.Ok).setMessage("test").build()))
}
}
Server Output:
START
sending -> 1
Solution below. The likely root cause: BlockLast(ctx) blocks inside the RequestChannel handler before the response Flux can even be returned, so the server cannot make progress (the Kotlin server works because then() composes the response without blocking). The fix is to drain the incoming stream through a channel and emit the response only once the upstream signals completion:
package main
import (
"context"
"github.com/golang/protobuf/proto"
"github.com/rsocket/rsocket-go"
"github.com/rsocket/rsocket-go/payload"
"github.com/rsocket/rsocket-go/rx"
"github.com/rsocket/rsocket-go/rx/flux"
"rsocket-go-rpc-test/proto"
)
type TestService struct {
totals int
pl_dwojciechowski_proto.FileService
}
var statusOK = &pl_dwojciechowski_proto.UploadStatus{
Message: "code",
Code: pl_dwojciechowski_proto.UploadStatusCode_Ok,
}
var statusErr = &pl_dwojciechowski_proto.UploadStatus{
Message: "code",
Code: pl_dwojciechowski_proto.UploadStatusCode_Failed,
}
func main() {
addr := "tcp://127.0.0.1:8081"
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
err := rsocket.Receive().
Fragment(1024).
Acceptor(func(setup payload.SetupPayload, sendingSocket rsocket.CloseableRSocket) (rsocket.RSocket, error) {
return rsocket.NewAbstractSocket(
rsocket.RequestChannel(func(msgs rx.Publisher) flux.Flux {
dataReceivedChan := make(chan bool, 1)
toChan, _ := flux.Clone(msgs).
DoOnError(func(e error) {
dataReceivedChan <- false
}).
DoOnComplete(func() {
dataReceivedChan <- true
}).
ToChan(ctx, 1)
fluxResponse := flux.Create(func(ctx context.Context, s flux.Sink) {
gluedContent := make([]byte, 0, 1024) // length 0, capacity 1024: a non-zero length would leave 1024 leading zero bytes before the appended content
for c := range toChan {
chunk := pl_dwojciechowski_proto.Chunk{}
_ = proto.Unmarshal(c.Data(), &chunk)
gluedContent = append(gluedContent, chunk.Content...)
}
if <-dataReceivedChan {
marshal, _ := proto.Marshal(statusOK)
s.Next(payload.New(marshal, nil))
s.Complete()
} else {
marshal, _ := proto.Marshal(statusErr)
s.Next(payload.New(marshal, nil))
s.Complete()
}
})
return fluxResponse
}),
), nil
}).
Transport(addr).
Serve(ctx)
panic(err)
}
Related
I want to store a ZK4500 fingerprint template in a K50 biometric attendance machine. I am using Java for the ZK4500 scanner and Python for communicating with the attendance machine (K50).
public void onAccept(){
if (count == 1){
image1.setImage(imageView.getImage());
fingerPrintTemplateForDB1 = FingerprintSensorEx.BlobToBase64(template, templateLen[0]);
System.out.println(fingerPrintTemplateForDB1);
++count;
} else if (count == 2) {
image2.setImage(imageView.getImage());
fingerPrintTemplateForDB2 = FingerprintSensorEx.BlobToBase64(template, templateLen[0]);
count = 1;
}
}
I am trying to store the template produced by this call:
fingerPrintTemplateForDB1 = FingerprintSensorEx.BlobToBase64(template, templateLen[0]);
from zk import ZK, const
from zk.finger import Finger
conn = None
zk = ZK('192.168.10.201', port=4370, timeout=5, password=0, force_udp=False, ommit_ping=False)
conn = zk.connect()
conn.disable_device()
conn.set_user(uid=6, name='ahmed f', privilege=const.USER_ADMIN, password='12345678', group_id='', user_id='6', card=0)
fingerPrintTemplateForDB1 = "Template from Zk45000"
Myfinger = {
"uid": 6,
"fid": 6,
"valid": 1,
'template': fingerPrintTemplateForDB1
}
users = conn.get_users()
for user in users:
if user.user_id == "3":
conn.save_user_template(user, [ Finger.json_unpack(Myfinger)])
I think the main issue is in the Finger object in Python:
from struct import pack #, unpack
import codecs
class Finger(object):
def __init__(self, uid, fid, valid, template):
self.size = len(template) # template only
self.uid = int(uid)
self.fid = int(fid)
self.valid = int(valid)
self.template = template
#self.mark = str().encode("hex")
self.mark = codecs.encode(template[:8], 'hex') + b'...' + codecs.encode(template[-8:], 'hex')
def repack(self): #full
return pack("HHbb%is" % (self.size), self.size+6, self.uid, self.fid, self.valid, self.template)
def repack_only(self): #only template
return pack("H%is" % (self.size), self.size, self.template)
@staticmethod
def json_unpack(json):
return Finger(
uid=json['uid'],
fid=json['fid'],
valid=json['valid'],
template=codecs.decode(json['template'],'hex')
)
def json_pack(self): #packs for json
return {
"size": self.size,
"uid": self.uid,
"fid": self.fid,
"valid": self.valid,
"template": codecs.encode(self.template, 'hex').decode('ascii')
}
def __eq__(self, other):
return self.__dict__ == other.__dict__
def __str__(self):
return "<Finger> [uid:{:>3}, fid:{}, size:{:>4} v:{} t:{}]".format(self.uid, self.fid, self.size, self.valid, self.mark)
def __repr__(self):
return "<Finger> [uid:{:>3}, fid:{}, size:{:>4} v:{} t:{}]".format(self.uid, self.fid, self.size, self.valid, self.mark)
def dump(self):
return "<Finger> [uid:{:>3}, fid:{}, size:{:>4} v:{} t:{}]".format(self.uid, self.fid, self.size, self.valid, codecs.encode(self.template, 'hex'))
But nothing is working. Does anyone have an idea?
I am using this unofficial library:
pip install -U pyzk
I found a solution. The main problem was that the K50 was using the ZKFinger 10.0 algorithm while the ZK4500 was using ZKFinger 9.0. Updating the ZK4500 drivers to the latest version from https://www.zkteco.com/en/product_detail/ZKFinger-SDK-Windows.html (not from the CD) solved the problem.
I have an issue where I read a byte stream from a big file (~100 MB), and after some integers I get the value 0 (but only with sbt run). When I hit the play button in IntelliJ, I get the value I expect (> 0).
My guess was that the environment is somehow different. But I could not spot the difference.
// DemoApp.scala
import java.nio.{ByteBuffer, ByteOrder}
object DemoApp extends App {
val inputStream = getClass.getResourceAsStream("/HandRanks.dat")
val handRanks = new Array[Byte](inputStream.available)
inputStream.read(handRanks)
inputStream.close()
def evalCard(value: Int) = {
val offset = value * 4
println("value: " + value)
println("offset: " + offset)
ByteBuffer.wrap(handRanks, offset, handRanks.length - offset).order(ByteOrder.LITTLE_ENDIAN).getInt
}
val cards: List[Int] = List(51, 45, 14, 2, 12, 28, 46)
def eval(cards: List[Int]): Unit = {
var p = 53
cards.foreach(card => {
println("p = " + evalCard(p))
p = evalCard(p + card)
})
println("result p: " + p);
}
eval(cards)
}
The HandRanks.dat can be found here: (I put it inside a directory called resources)
https://github.com/Robert-Nickel/scala-texas-holdem/blob/master/src/main/resources/HandRanks.dat
build.sbt is:
name := "LoadInts"
version := "0.1"
scalaVersion := "2.13.4"
On my Windows machine I use sbt 1.4.6 with Oracle Java 11.
You will see that the evalCard call works 4 times, but on the fifth call the return value is 0. It should be higher than 0, which it is when using IntelliJ's play button.
You are not reading the whole content. This
val handRanks = new Array[Byte](inputStream.available)
allocates only as many bytes as the InputStream reports as currently available (which is just an estimate, not the file size), and then you read that amount with
inputStream.read(handRanks)
Depending on the defaults you will process a different amount, but it will never be the 100 MB of data. For that you would have to read the data into some structure in a loop (bad idea) or process it in chunks (with iterators, streams, etc.).
import scala.annotation.tailrec
import scala.util.Using
// Using will close the resource whether error happens or not
Using(getClass.getResourceAsStream("/HandRanks.dat")) { inputStream =>
def readChunk(): Option[Array[Byte]] = {
// can be done better, but that's not the point here
val buffer = new Array[Byte](inputStream.available)
val bytesRead = inputStream.read(buffer)
if (bytesRead >= 0) Some(buffer.take(bytesRead))
else None
}
@tailrec def process(): Unit = {
readChunk() match {
case Some(chunk) =>
// do something
process()
case None =>
// nothing to do - EOF reached
}
}
process()
}
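Alternatively, if you really do want the whole file in memory (and your evalCard lookups index arbitrary offsets, so you probably do), Java 9+ offers InputStream.readAllBytes, which performs that read loop for you. A minimal sketch, since you are on Java 11:
import scala.util.Using
// readAllBytes keeps reading until EOF, unlike a single read(buffer) call,
// so the entire resource ends up in the array; Using closes the stream either way.
val handRanks: Array[Byte] =
Using.resource(getClass.getResourceAsStream("/HandRanks.dat")) { in =>
in.readAllBytes()
}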
I am developing a Swift app. I'm using sockets for the background connection. But now I get this error when I try to use it:
libsystem_kernel.dylib`__pthread_kill:
0x11329edd0 <+0>: movl $0x2000148, %eax ; imm = 0x2000148
0x11329edd5 <+5>: movq %rcx, %r10
0x11329edd8 <+8>: syscall
-> 0x11329edda <+10>: jae 0x11329ede4 ; <+20>
0x11329eddc <+12>: movq %rax, %rdi
0x11329eddf <+15>: jmp 0x113297d6f ; cerror_nocancel
0x11329ede4 <+20>: retq
0x11329ede5 <+21>: nop
0x11329ede6 <+22>: nop
0x11329ede7 <+23>: nop
It also shows "Thread: signal SIGABRT".
Here is where I call the method:
let x:ComObwareAlifstoPostConnection = ComObwareAlifstoPostConnection()
DispatchQueue.global(qos: .background).async {
x.connect()
let y = x.getPostsWith("username", with: "password", with: "15000000000000", with: "down", with: "0")
DispatchQueue.main.async {
}
}
And here is my Objective-C class (it's Java converted by J2ObjC):
- (IOSObjectArray *)getPostsWithNSString:(NSString *)username
withNSString:(NSString *)password
withNSString:(NSString *)time
withNSString:(NSString *)direction
withNSString:(NSString *)minTime {
IOSObjectArray *returnArray = nil;
@try {
[self connect];
if (ComObwareAlifstoPostConnection_socket == nil) {
return nil;
}
JavaIoDataOutputStream *os = new_JavaIoDataOutputStream_initWithJavaIoOutputStream_([ComObwareAlifstoPostConnection_socket getOutputStream]);
[os writeUTFWithNSString:JreStrcat("C$$$$$$$$$$", '2', ComObwareAlifstoPostConnection_SPLITTED, username, ComObwareAlifstoPostConnection_SPLITTED, password, ComObwareAlifstoPostConnection_SPLITTED, time, ComObwareAlifstoPostConnection_SPLITTED, direction, ComObwareAlifstoPostConnection_SPLITTED, minTime)];
[os flush];
JavaIoObjectInputStream *in = new_JavaIoObjectInputStream_initWithJavaIoInputStream_([((JavaNetSocket *) nil_chk(ComObwareAlifstoPostConnection_socket)) getInputStream]);
while ((returnArray = (IOSObjectArray *) cast_check([in readObject], IOSClass_arrayType(ComObwareAlifstoPostPost_class_(), 1))) != nil) {
return returnArray;
}
}
@catch (JavaNetUnknownHostException *e) {
}
@catch (JavaIoIOException *e) {
if (!hastried_) {
hastried_ = true;
return [self getPostsWithNSString:username withNSString:password withNSString:time withNSString:direction withNSString:minTime];
}
[((JavaIoIOException *) nil_chk(e)) printStackTrace];
}
@catch (JavaLangClassNotFoundException *e) {
}
return nil;
}
So where could the error be? It works with some other methods.
Build with debugging enabled: either in Xcode or, if using the command line, include the C compiler (clang) "-g" flag. Then run it in Xcode's debugger (or from the command line using lldb). When your app aborts, it should break at the __pthread_kill code you listed, and you can move up the stack frames ("up" command) to see which statement triggered the abort.
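For example, once it stops there, a few standard lldb commands will walk you back to your own code (a sketch of a typical session):
(lldb) bt
(lldb) up
(lldb) frame variable
bt prints the backtrace of the aborting thread, up moves the selected frame toward your code, and frame variable dumps the locals of the selected frame.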
I am sending some JSON data from a Java server via TCP to Logstash (Logstash sends it on to Elasticsearch), and the JSON data ends up escaped in Elasticsearch.
Java serialization:
Map<String, Object> jsonMap = new HashMap<>();
jsonMap.put("age", event.getAge());
for (Entry<String, Serializable> attribute : event.getAttributes().entrySet()) {
jsonMap.put("attribute_" + attribute.getKey(), attribute.getValue());
}
jsonMap.put("message", event.getMessage());
jsonMap.put("cause", event.getCause());
jsonMap.put("timestamp", event.getTimestamp());
jsonMap.put("eventid", event.getEventId());
jsonMap.put("instanceid", event.getInstanceId());
jsonMap.put("origin", event.getOrigin());
jsonMap.put("severity", event.getSeverity());
jsonMap.put("durability", event.getDurability());
jsonMap.put("detail", event.getDetail());
int i = 0;
for (String tag : event.getTags()) {
jsonMap.put("tag_" + String.valueOf(i), tag);
i++;
}
return new JSONObject(jsonMap).toString();
Java Socket:
try (Socket clientSocket = new Socket(url, port);
OutputStreamWriter out = new OutputStreamWriter(
clientSocket.getOutputStream(), "UTF-8")) {
out.write(content.toString());
out.flush();
}
Example data in Elastic:
"message": "{\"detail\":null,\"cause\":null,\"attribute_start\":\"Mon Jan 11 16:15:28 CET 2016\",\"durability\":\"MOMENTARY\",\"attribute_login\":\"\",\"origin\":\"fortuna.ws.navipro\",\"severity\":\"ERROR\",\"attribute_type\":null,\"attribute_methodName\":\"Logout\",\"eventid\":\"ws.navipro.call\",\"attribute_call\":\"[57,7256538816272415441,,OK]{0 connections} CZ() Calling method 'Logout' at 1452525329029(Mon Jan 11 16:15:28 CET 2016-Mon Jan 11 16:15:29 CET 2016; roundtrip=36ms):\\n\\tRequest\\n\\t\\tCLASS com.etnetera.projects.jnp.fortuna.navipro.ws.ClientLogoutRequest\\n\\t\\tkeep: true\\n\\t\\tCLASS com.etnetera.projects.jnp.fortuna.navipro.ws.RequestBody\\n\\t\\tcountry: CZ\\n\\t\\tsessionCC: NULL\\n\\t\\tsessionID:\\n\\t\\tsessionIP:\\n\\t\\tdebug: NULL\\n\\t\\tns: NULL\\n\\t\\tCLASS com.etnetera.projects.jnp.fortuna.navipro.ws.RequestCorpus\\n\\tResponse\\n\\t\\tCLASS com.etnetera.projects.jnp.fortuna.navipro.ws.ClientLogoutResponse\\n\\t\\tCLASS com.etnetera.projects.jnp.fortuna.navipro.ws.ResponseBody\\n\\t\\tCLASS com.etnetera.projects.jnp.fortuna.navipro.ws.ResponseCorpus\\n\\t\\tmessage: \\n\\t\\t[1] \\n\\t\\t\\tCLASS com.etnetera.projects.jnp.fortuna.navipro.ws.Message\\n\\t\\t\\tparam: \\n\\t\\t\\t[1] \\n\\t\\t\\t\\tCLASS com.etnetera.projects.jnp.fortuna.navipro.ws.Message$Param\\n\\t\\t\\t\\tindex: 0\\n\\t\\t\\t\\ttype: NULL\\n\\t\\t\\t\\tvalue: 3\\n\\t\\t\\tid: 104\\n\\t\\t\\tseverity: NOTIFICATION\\n\\t\\t\\tlink: NULL\\n\\t\\tentryLink: NULL\\n\\t\\thint: NULL\\n\\t\\thintType: NULL\\n\\t\\tstatus: OK\\nEND\",\"timestamp\":1452525329030,\"message\":\"NaviPro method Logoutcalled.\",\"tag_1\":\"NaviPro\",\"attribute_end\":\"Mon Jan 11 16:15:29 CET 2016\",\"attribute_sessionId\":\"\",\"age\":0,\"tag_0\":\"Logout\",\"instanceid\":\"Logout\",\"attribute_address\":\""}"
Logstash config:
input {
syslog {
port => 1514
}
tcp {
port => 3333
}
}
filter {
if [type] == "docker" {
json {
source => "message"
}
mutate {
rename => [ "log", "message" ]
}
date {
match => [ "time", "ISO8601" ]
}
}
}
output {
elasticsearch {
hosts => "elasticsearch:9200"
}
}
I want to have data in Elastic as JSON so I can filter fields in Kibana.
EDIT:
If I try to change the configuration to this:
input {
tcp {
port => 3333
codec => json
}
}
Logstash refuses to launch, with this line in the log:
logstash_1 | {:timestamp=>"2016-01-13T10:13:58.583000+0000", :message=>"SIGTERM received. Shutting down the pipeline.", :level=>:warn}
The problem was an incorrect Logstash configuration. Logstash was running but not sending anything through the filter. This is the correct configuration:
input {
tcp {
port => 3333
type => "java"
}
}
filter {
if [type] == "java" {
json {
source => "message"
}
}
}
output {
elasticsearch {
hosts => "elasticsearch:9200"
}
}
I created a simple test, and all you should need to add to your Logstash configuration is codec => json. The default codec is "line", which treats the payload as a plain string and escapes the characters in it.
input {
tcp {
port => 3333
codec => json
}
}
output {
stdout {
codec => rubydebug
}
elasticsearch {
hosts => "elasticsearch:9200"
}
}
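One caveat, in case you ever keep the connection open and send several events over it: json_lines is the newline-delimited variant of the json codec and expects one JSON document per \n-terminated line, so the Java side would then need to write a trailing newline after each document. The input would look like this:
input {
tcp {
port => 3333
codec => json_lines
}
}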
I have the following configuration in application.conf:
bounded-mailbox {
mailbox-type = "akka.dispatch.BoundedMailbox"
mailbox-capacity = 100
mailbox-push-timeout-time = 3s
}
akka {
loggers = ["akka.event.slf4j.Slf4jLogger"]
loglevel = INFO
daemonic = on
}
This is how I configured my actor:
public class MyTestActor extends UntypedActor implements RequiresMessageQueue<BoundedMessageQueueSemantics>{
@Override
public void onReceive(Object message) throws Exception {
if (message instanceof String){
Thread.sleep(500);
System.out.println("message = " + message);
}
else {
System.out.println("Unknown Message " );
}
}
}
Now this is how I instantiate this actor:
myTestActor = myActorSystem.actorOf(Props.create(MyTestActor.class).withMailbox("bounded-mailbox"), "simple-actor");
After that, my code sends 3000 messages to this actor:
for (int i = 0; i < 3000; i++) {
myTestActor.tell(guestName, null);
}
What I expect to see is an exception saying that my queue is full, but my messages are printed inside the onReceive method every half second, as if nothing happened. So I believe my mailbox configuration is not applied.
What am I doing wrong?
Update: I created an actor which subscribes to dead-letter events:
deadLetterActor = myActorSystem.actorOf(Props.create(DeadLetterMonitor.class),"deadLetter-monitor");
and installed Kamon for queue monitoring.
After sending 3000 messages to the actor, Kamon shows me the following:
Actor: user/simple-actor
MailBox size:
Min: 100
Avg.: 100.0
Max: 101
Actor: system/deadLetterListener
MailBox size:
Min: 0
Avg.: 0.0
Max: 0
Actor: system/deadLetter-monitor
MailBox size:
Min: 0
Avg.: 0.0
Max: 0
By default Akka discards overflowing messages into DeadLetters, and the actor doesn't stop processing:
https://github.com/akka/akka/blob/876b8045a1fdb9cdd880eeab8b1611aa976576f6/akka-actor/src/main/scala/akka/dispatch/Mailbox.scala#L411
But the sending thread is blocked for the interval configured by mailbox-push-timeout-time before the message is discarded. With your 3s timeout and an actor that frees a mailbox slot every 500 ms, the sender never waits long enough to hit the timeout, which is why every message still gets printed (and why Kamon shows the mailbox pinned at its capacity of 100). Try decreasing the timeout to 1ms and see that the following test passes:
import java.util.concurrent.atomic.AtomicInteger
import akka.actor._
import com.typesafe.config.Config
import com.typesafe.config.ConfigFactory._
import org.specs2.mutable.Specification
class BoundedActorSpec extends Specification {
args(sequential = true)
def config: Config = load(parseString(
"""
bounded-mailbox {
mailbox-type = "akka.dispatch.BoundedMailbox"
mailbox-capacity = 100
mailbox-push-timeout-time = 1ms
}
"""))
val system = ActorSystem("system", config)
"some messages should go to dead letters" in {
system.eventStream.subscribe(system.actorOf(Props(classOf[DeadLetterMetricsActor])), classOf[DeadLetter])
val myTestActor = system.actorOf(Props(classOf[MyTestActor]).withMailbox("bounded-mailbox"))
for (i <- 0 until 3000) {
myTestActor.tell("guestName", null)
}
Thread.sleep(100)
system.shutdown()
system.awaitTermination()
DeadLetterMetricsActor.deadLetterCount.get must be greaterThan(0)
}
}
class MyTestActor extends Actor {
def receive = {
case message: String =>
Thread.sleep(500)
println("message = " + message);
case _ => println("Unknown Message")
}
}
object DeadLetterMetricsActor {
val deadLetterCount = new AtomicInteger
}
class DeadLetterMetricsActor extends Actor {
def receive = {
case _: DeadLetter => DeadLetterMetricsActor.deadLetterCount.incrementAndGet()
}
}