I'm trying to send several queries at once to a device (a car's ECU, via an ELM327 adapter) over RxTx. Here's the class that utilizes RxTx:
import collection.JavaConversions._
import gnu.io._
import java.io._
import java.util.TooManyListenersException

class Serial private (portName: String,
                      baudRate: Int,
                      dataBits: Int,
                      stopBits: Int,
                      parity: Int,
                      flowControl: Int) {

  private val portId = CommPortIdentifier.getPortIdentifier(portName)
  private val serial = portId.open("Serial Connection from OBDScan", 5000).asInstanceOf[SerialPort]
  setPortParameters(baudRate, dataBits, stopBits, parity, flowControl)
  private val istream = serial.getInputStream
  private val ostream = serial.getOutputStream

  def this(portName: String, baudRate: Int = 115200) = this(portName, baudRate,
    SerialPort.DATABITS_8,
    SerialPort.STOPBITS_1,
    SerialPort.PARITY_NONE,
    SerialPort.FLOWCONTROL_NONE)

  def close = {
    try {
      istream.close
      ostream.close
    } catch {
      case ioe: IOException => // don't care, it's ended already
    }
    serial.close
  }

  def query(command: String) = {
    ostream.write(command.getBytes)
    ostream.write("\r\n".getBytes) // necessary for the serial port
  }

  def queryResult: String = {
    try {
      val availableBytes = istream.available
      val buffer = new Array[Byte](availableBytes)
      if (availableBytes > 0) {
        istream.read(buffer, 0, availableBytes)
      }
      new String(buffer, 0, availableBytes)
    } catch {
      case ioe: IOException => "Something went wrong! Please try again."
    }
  }

  def addListener(listener: SerialPortEventListener) = {
    try {
      serial.addEventListener(listener)
      serial.notifyOnDataAvailable(true)
    } catch {
      case tm: TooManyListenersException => println("Too many listeners")
    }
  }

  def removeListener = {
    serial.removeEventListener
  }

  private def setPortParameters(br: Int,
                                db: Int,
                                sb: Int,
                                p: Int,
                                fc: Int) = {
    // use the passed-in parameters, not the constructor fields
    serial.setSerialPortParams(br, db, sb, p)
    serial.setFlowControlMode(fc)
  }
}

object Serial {
  def connect(portName: String, baudRate: Int = 115200): Serial =
    new Serial(portName, baudRate) // exception handling omitted
}
Now querying works fine and gives me the correct result. The problem comes when I send several queries at once:
val serial = Serial.connect("COM34")
serial.query("AT Z")
serial.query("10 00")
The device received both queries, but returns only one result. If I want the next result, I have to send another query, which leaves the program lagging one query behind. If I call Thread.sleep after each query:
val serial = Serial.connect("COM34")
serial.query("AT Z")
Thread.sleep(500)
serial.query("10 00")
That solves the problem, but of course the entire application stops while Thread.sleep runs. I don't want that, since the application will run these queries all the time (which means it would hang constantly).
Since I'm on Scala, I'm thinking of using Actors or something similar, but that feels like overkill for a desktop application. Is there any way to do this without Actors? Maybe I'm reading the response from the serial port wrong?
TL;DR: I want to send several queries to a serial device via RxTx without locking up the whole application (my current solution uses Thread.sleep, which blocks everything). How can I do that?
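One Actor-free direction, sketched here in Java since RxTx itself is a Java library (the idea translates directly to Scala): hand each command to a single-threaded scheduler, so any spacing between commands happens off the main thread. The SerialWorker class and the fixed 500 ms gap are assumptions of this sketch, not part of the code above; a sturdier variant would read each response up to the ELM327's '>' prompt before sending the next command instead of trusting a fixed delay.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SerialWorker {
    // One background thread: commands run one at a time, in order, and
    // any waiting happens here instead of on the UI/main thread.
    private final ScheduledExecutorService worker =
            Executors.newSingleThreadScheduledExecutor();
    private long nextSlotMillis = 0;

    // Queue a command; each one is scheduled ~500 ms after the previous,
    // giving the device time to answer before the next query arrives.
    public synchronized void enqueue(Runnable sendCommand) {
        long now = System.currentTimeMillis();
        nextSlotMillis = Math.max(now, nextSlotMillis);
        worker.schedule(sendCommand, nextSlotMillis - now, TimeUnit.MILLISECONDS);
        nextSlotMillis += 500;
    }

    public void shutdown() {
        worker.shutdown();
    }
}

Usage would then be worker.enqueue(() -> serial.query("AT Z")) followed by worker.enqueue(() -> serial.query("10 00")), with the caller returning immediately.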
I'm an Akka beginner (I am using Java), and I'm building a file transfer system with Akka.
Currently, I have completed sending a file from Actor1 (local) to Actor2 (remote).
Now I'm thinking about how to handle problems during a transfer, and that led me to this question:
If I lose my network connection while transferring a file, the transfer fails (say at 90 percent complete). The connection recovers a few minutes later. Is it possible to transfer only the rest of the file data (the remaining 10%)?
If that's possible, please give me some advice.
Here is my simple code.
Thanks :)
Actor1 (Local)
private Behavior<Event> onTick() {
    ....
    String fileName = "test.zip";
    Source<ByteString, CompletionStage<IOResult>> logs = FileIO.fromPath(Paths.get(fileName));
    logs.runForeach(f -> originalSize += f.size(), mat).thenRun(() -> System.out.println("originalSize : " + originalSize));
    SourceRef<ByteString> logsRef = logs.runWith(StreamRefs.sourceRef(), mat);
    getContext().ask(
        Receiver.FileTransfered.class,
        selectedReceiver,
        timeout,
        responseRef -> new Receiver.TransferFile(logsRef, responseRef, fileName),
        (response, failure) -> {
            if (response != null) {
                return new TransferCompleted(fileName, response.transferedSize);
            } else {
                return new JobFailed("Processing timed out", fileName);
            }
        }
    );
}
Actor2 (Remote)
public static Behavior<Command> create() {
    return Behaviors.setup(context -> {
        ...
        Materializer mat = Materializer.createMaterializer(context);
        return Behaviors.receive(Command.class)
            .onMessage(TransferFile.class, command -> {
                command.sourceRef.getSource().runWith(FileIO.toPath(Paths.get("test.zip")), mat);
                command.replyTo.tell(new FileTransfered("filename", 1024));
                return Behaviors.same();
            }).build();
    });
}
You need to think about the following for a proper implementation of file transfer with fault tolerance:
1. How to identify that a transfer has to be resumed for a given file.
2. How to find the point from which to resume the transfer.
The following implementation makes very simple assumptions about 1 and 2:
1. The file name is unique and thus can be used for identification. Strictly speaking this is not true: you can, for example, transfer files with the same name from different folders, or from different nodes, etc. You will have to adjust this based on your use case.
2. It is assumed that the last/all writes on the receiver side wrote all bytes correctly, so the total number of bytes written indicates the point from which to resume the transfer. If that cannot be guaranteed, you need to logically split the original file into chunks and transfer the hash of each chunk, together with its size and position, to the receiver, which then validates the chunks on its side and finds the correct pointer for resuming the transfer (a sketch of this idea follows below).
3. (That's a bit more than 2 :)) This implementation ignores detecting the transfer problem itself and focuses on 1 and 2 instead.
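Before the actual implementation, here is a rough sketch (in Java, matching the asker's language, and not part of the answer's code below) of the chunk-validation idea from point 2; the 8 KB chunk size and SHA-256 are arbitrary choices of mine:

import java.io.IOException;
import java.io.RandomAccessFile;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.ArrayList;
import java.util.List;

public class ChunkHashes {
    static final int CHUNK_SIZE = 8192;

    // Hash every CHUNK_SIZE block of a file; the receiver computes the
    // same list over its partial copy and compares.
    static List<byte[]> chunkHashes(String path) throws IOException, NoSuchAlgorithmException {
        List<byte[]> hashes = new ArrayList<>();
        try (RandomAccessFile f = new RandomAccessFile(path, "r")) {
            byte[] buf = new byte[CHUNK_SIZE];
            int read;
            while ((read = f.read(buf)) > 0) {
                MessageDigest md = MessageDigest.getInstance("SHA-256");
                md.update(buf, 0, read);
                hashes.add(md.digest());
            }
        }
        return hashes;
    }

    // Index of the first chunk whose hash differs (or is missing) on the
    // receiver; the transfer resumes at offset = index * CHUNK_SIZE.
    static long resumeOffset(List<byte[]> senderHashes, List<byte[]> receiverHashes) {
        int i = 0;
        while (i < receiverHashes.size() && i < senderHashes.size()
                && MessageDigest.isEqual(senderHashes.get(i), receiverHashes.get(i))) {
            i++;
        }
        return (long) i * CHUNK_SIZE;
    }
}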
The code:
object Sender {
  sealed trait Command
  case class Upload(file: String) extends Command
  case class StartWithIndex(file: String, index: Long) extends Sender.Command

  def behavior(receiver: ActorRef[Receiver.Command]): Behavior[Sender.Command] = Behaviors.setup[Sender.Command] { ctx =>
    implicit val materializer: Materializer = SystemMaterializer(ctx.system).materializer
    Behaviors.receiveMessage {
      case Upload(file) =>
        receiver.tell(Receiver.InitUpload(file, ctx.self.narrow[StartWithIndex]))
        ctx.log.info(s"Initiating upload of $file")
        Behaviors.same
      case StartWithIndex(file, startWith) =>
        val source = FileIO.fromPath(Paths.get(file), chunkSize = 8192, startWith)
        val ref = source.runWith(StreamRefs.sourceRef())
        ctx.log.info(s"Starting upload of $file")
        receiver.tell(Receiver.Upload(file, ref))
        Behaviors.same
    }
  }
}
object Receiver {
  sealed trait Command
  case class InitUpload(file: String, replyTo: ActorRef[Sender.StartWithIndex]) extends Command
  case class Upload(file: String, fileSource: SourceRef[ByteString]) extends Command

  val behavior: Behavior[Receiver.Command] = Behaviors.setup[Receiver.Command] { ctx =>
    implicit val materializer: Materializer = SystemMaterializer(ctx.system).materializer
    Behaviors.receiveMessage {
      case InitUpload(path, replyTo) =>
        val file = fileAtDestination(path)
        val index = if (file.exists()) file.length else 0
        ctx.log.info(s"Got init command for $file at pointer $index")
        replyTo.tell(Sender.StartWithIndex(path, index.toLong))
        Behaviors.same
      case Upload(path, fileSource) =>
        val file = fileAtDestination(path)
        val sink = if (file.exists()) {
          FileIO.toPath(file.toPath, Set(StandardOpenOption.APPEND, StandardOpenOption.WRITE))
        } else {
          FileIO.toPath(file.toPath, Set(StandardOpenOption.CREATE_NEW, StandardOpenOption.WRITE))
        }
        ctx.log.info(s"Saving file into ${file.toPath}")
        fileSource.runWith(sink)
        Behaviors.same
    }
  }
}
Some auxiliary methods
val destination: File = Files.createTempDirectory("destination").toFile

def fileAtDestination(file: String) = {
  val name = new File(file).getName
  new File(destination, name)
}

def writeRandomToFile(file: File, size: Int): Unit = {
  val out = new FileOutputStream(file, true)
  (0 until size).foreach { _ =>
    out.write(Random.nextPrintableChar())
  }
  out.close()
}
And finally some test code
// sender and receiver bootstrapping is omitted
//Create some dummy file to upload
val file: Path = Files.createTempFile("test", "test")
writeRandomToFile(file.toFile, 1000)
//Initiate a new upload
sender.tell(Sender.Upload(file.toAbsolutePath.toString))
// Sleep to allow file upload to finish
Thread.sleep(1000)
//Write more data to the file to emulate a failure
writeRandomToFile(file.toFile, 1000)
//Initiate a new upload that will "recover" from the previous upload
sender.tell(Sender.Upload(file.toAbsolutePath.toString))
Finally, the whole process can be summed up as: the sender asks the receiver to initialize an upload, the receiver replies with the number of bytes it already has, and the sender streams the file starting from that offset.
I'm trying to get a simple proof-of-concept multipart upload working in Kotlin using the Amazon S3 client, based on the documentation. The first part uploads successfully and I get a response with an ETag. The second part doesn't upload a single byte and times out. It always fails after the first part. Is there some connection cleanup that I need to do manually somehow?
Credentials and rights are all fine. The magic numbers below are just there to reach the minimum part size of 5 MB.
What am I doing wrong here?
fun main() {
    val amazonS3 =
        AmazonS3ClientBuilder.standard().withRegion(Regions.EU_WEST_1).withCredentials(ProfileCredentialsProvider())
            .build()

    val bucket = "io.inbot.sandbox"
    val key = "test.txt"

    val multipartUpload =
        amazonS3.initiateMultipartUpload(InitiateMultipartUploadRequest(bucket, key))
    var pn = 1
    var off = 0L
    val etags = mutableListOf<PartETag>()

    for (i in 0.rangeTo(5)) {
        val buf = ByteArrayOutputStream()
        val writer = buf.writer().buffered()
        for (l in 0.rangeTo(100000)) {
            writer.write("part $i - Hello world for the $l'th time this part.\n")
        }
        writer.flush()
        writer.close()
        val bytes = buf.toByteArray()

        val md = MessageDigest.getInstance("MD5")
        md.update(bytes)
        val md5 = Base64.encodeBytes(md.digest())
        println("going to write ${bytes.size}")
        var partRequest = UploadPartRequest().withBucketName(bucket).withKey(key)
            .withUploadId(multipartUpload.uploadId)
            .withFileOffset(off)
            .withPartSize(bytes.size.toLong())
            .withPartNumber(pn++)
            .withMD5Digest(md5)
            .withInputStream(bytes.inputStream())
            .withGeneralProgressListener<UploadPartRequest> { it ->
                println(it.bytesTransferred)
            }
        if (i == 5) {
            partRequest = partRequest.withLastPart(true)
        }
        off += bytes.size
        val partResponse = amazonS3.uploadPart(partRequest)
        etags.add(partResponse.partETag)
        println("part ${partResponse.partNumber} ${partResponse.eTag} ${bytes.size}")
    }
    val completeMultipartUpload =
        amazonS3.completeMultipartUpload(CompleteMultipartUploadRequest(bucket, key, multipartUpload.uploadId, etags))
}
This always fails on the second part with
Exception in thread "main" com.amazonaws.services.s3.model.AmazonS3Exception: Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed. (Service: Amazon S3; Status Code: 400; Error Code: RequestTimeout; Request ID: F419872A24BB5526; S3 Extended Request ID: 48XWljQNuOH6LJG9Z85NJOGVy4iv/ru44Ai8hxEP+P+nqHECXZwWNwBoMyjiQfxKpr6icGFjxYc=), S3 Extended Request ID: 48XWljQNuOH6LJG9Z85NJOGVy4iv/ru44Ai8hxEP+P+nqHECXZwWNwBoMyjiQfxKpr6icGFjxYc=
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1630)
Just to preempt some of the answers I'm not looking for: my intention is NOT to upload files, but eventually to be able to stream arbitrary-length streams to S3 by simply uploading parts until done and then combining them. So I can't really use the TransferManager, because that requires me to know the size in advance, which I won't. Also, buffering this as a file is not something I want to do, since this will run in a dockerized server application. So I really want to upload an arbitrary number of parts. I'm happy to do it sequentially, though I wouldn't mind parallelism.
I've also used "com.github.alexmojaki:s3-stream-upload:1.0.1", but that seems to keep a lot of state in memory (I've run out a couple of times), so I'd like to replace it with something simpler.
Update: thanks Ilya in the comments below, removing the withFileOffset fixes things.
Removing withFileOffset fixes things; it is intended for uploads that read from a file set via withFile, not for stream-based uploads. Thanks @Ilya for pointing this out.
Here's a simple OutputStream implementation that actually works.
package io.inbot.aws

import com.amazonaws.auth.profile.ProfileCredentialsProvider
import com.amazonaws.regions.Regions
import com.amazonaws.services.s3.AmazonS3
import com.amazonaws.services.s3.AmazonS3ClientBuilder
import com.amazonaws.services.s3.model.CompleteMultipartUploadRequest
import com.amazonaws.services.s3.model.InitiateMultipartUploadRequest
import com.amazonaws.services.s3.model.InitiateMultipartUploadResult
import com.amazonaws.services.s3.model.PartETag
import com.amazonaws.services.s3.model.UploadPartRequest
import mu.KotlinLogging
import java.io.ByteArrayOutputStream
import java.io.OutputStream
import java.security.MessageDigest
import java.util.Base64

private val logger = KotlinLogging.logger { }

class S3Writer(
    private val amazonS3: AmazonS3,
    private val bucket: String,
    private val key: String,
    private val threshold: Int = 5 * 1024 * 1024
) : OutputStream(), AutoCloseable {
    private val etags: MutableList<PartETag> = mutableListOf()
    private val multipartUpload: InitiateMultipartUploadResult = this.amazonS3.initiateMultipartUpload(InitiateMultipartUploadRequest(bucket, key))
    private val currentPart = ByteArrayOutputStream(threshold)
    private var partNumber = 1

    override fun write(b: Int) {
        currentPart.write(b)
        if (currentPart.size() > threshold) {
            sendPart()
        }
    }

    private fun sendPart(last: Boolean = false) {
        logger.info { "sending part $partNumber" }
        currentPart.flush()
        val bytes = currentPart.toByteArray()
        val md = MessageDigest.getInstance("MD5")
        md.update(bytes)
        // Content-MD5 must be the base64 string of the digest
        val md5 = Base64.getEncoder().encodeToString(md.digest())
        var partRequest = UploadPartRequest().withBucketName(bucket).withKey(key)
            .withUploadId(multipartUpload.uploadId)
            .withPartSize(currentPart.size().toLong())
            .withPartNumber(partNumber++)
            .withMD5Digest(md5)
            .withInputStream(bytes.inputStream())
        if (last) {
            logger.info { "final part" }
            partRequest = partRequest.withLastPart(true)
        }
        val partResponse = amazonS3.uploadPart(partRequest)
        etags.add(partResponse.partETag)
        currentPart.reset()
    }

    override fun close() {
        if (currentPart.size() > 0) {
            sendPart(true)
        }
        logger.info { "completing" }
        amazonS3.completeMultipartUpload(CompleteMultipartUploadRequest(bucket, key, multipartUpload.uploadId, etags))
    }
}
fun main() {
    val amazonS3 =
        AmazonS3ClientBuilder.standard().withRegion(Regions.EU_WEST_1).withCredentials(ProfileCredentialsProvider())
            .build()

    val bucket = "io.inbot.sandbox"
    val key = "test.txt"
    try {
        S3Writer(amazonS3, bucket, key).use {
            val w = it.bufferedWriter()
            for (i in 0.rangeTo(1000000)) {
                w.write("Line $i: hello again ...\n")
            }
            w.flush() // push buffered characters into the S3Writer before it closes
        }
    } catch (e: Throwable) {
        logger.error(e.message, e)
    }
}
What I am trying to do:
I am trying to combine Linux udev with Kotlin. More exactly, when I plug a USB device into my PC, one of my udev rules launches a script that appends some text to a FIFO file (like add,003,026, where 003 is the bus number and 026 is the device number).
On the Kotlin side, I intend to read this information and print it to the IDE console. All good here.
My problem:
When I receive only one event from a single plug-in, everything is OK. But when I plug in multiple devices (by pressing the power button on a hub with 7 devices connected), I usually receive only 3 of them on the Kotlin side, even though the FIFO file contains all the values.
Sample code
Here is my latest attempt at getting all the information:
fun main(args: Array<String>) {
    println("Hello, World")
    while (true) {
        println("I had received this: " + readUsbState())
        //println("I received back: " + ins.read())
        TimeUnit.SECONDS.sleep(1L)
    }
}

@Throws(FileNotFoundException::class)
private fun readUsbState(): String {
    if (!File("/emy/usb_events").exists()) {
        throw FileNotFoundException("The file /emy/usb_events doesn't exist!")
    }
    val bytes = ByteArrayOutputStream()
    var byteRead = 0
    val bytesArray = ByteArray(1024)
    try {
        FileInputStream("/emy/usb_events").use { inputStream ->
            byteRead = inputStream.read(bytesArray, 0, bytesArray.size)
            if (byteRead >= 0) {
                bytes.write(bytesArray, 0, byteRead)
            }
        }
    } catch (ex: IOException) {
        ex.printStackTrace()
    }
    return bytes.toString()
}
More instructions:
My FIFO file is /emy/usb_events; it was created with mkfifo /emy/usb_events. For testing, so you don't have to bother with the udev rules, you can simply run echo -e "add,001,001\nadd,001,002\nadd,001,003\n..." >> /emy/usb_events
I have found the correct answer.
The problem was that I was closing the FIFO file after the first read, so events still queued in the FIFO were discarded. The code below works perfectly:
fun main(args: Array<String>) {
    println("Hello, World")
    while (true) {
        println("I had received this: " + readUsbState4())
        TimeUnit.SECONDS.sleep(1L)
    }
}

private fun readUsbState4(): String {
    return File("/emy/usb_events").readLines(Charset.defaultCharset()).toString()
}
In the list that I am receiving, I may have multiple pieces of information like:
Hello, World
I had received this: [add,046,003,4-Port_USB_2.0_Hub,Generic,]
I had received this: [add,048,003,Android_Phone,FA696BN00557,HTC, add,047,003,4-Port_USB_2.0_Hub,Generic,, add,049,003,DataTraveler_2.0,001BFC31A1C7C161D9C75AED,Kingston]
I had received this: [add,050,003,SAMSUNG_Android,06157df6cc9ac70e,SAMSUNG, add,053,003,SAMSUNG_Android,ce0416046d6a9e3f05,SAMSUNG, add,051,003,Acer_S57,0123456789ABCDEF,MediaTek, add,052,003,ACER_Z160,SKU4HI8L4L99N76H,MediaTek]
I had received this: [remove,048,003]
I had received this: [remove,051,003, remove,052,003, remove,049,003, remove,053,003, remove,050,003]
I had received this: [remove,047,003]
I had received this: [remove,046,003]
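Not from the original answer, but worth sketching: instead of polling once per second, the FIFO can be held open and read with blocking calls, so events print the moment a writer delivers them. A rough Java equivalent (same /emy/usb_events path; everything else is an assumption):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class FifoReader {
    public static void main(String[] args) throws IOException {
        while (true) {
            // Opening a FIFO for reading blocks until a writer opens it.
            try (BufferedReader reader = new BufferedReader(new FileReader("/emy/usb_events"))) {
                String line;
                // readLine() blocks while writers are connected, so no
                // event is lost between polls.
                while ((line = reader.readLine()) != null) {
                    System.out.println("I had received this: " + line);
                }
                // null means every writer closed the FIFO; loop around,
                // reopen, and block until the next writer appears.
            }
        }
    }
}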
I'd like to encode a protobuf in Python and send it via Redis to a Java application, where I decode it.
At the moment I can print the data in the Java app and the values are correct, but every time I receive data I get the following exception:
InvalidProtocolBufferException: Protocol message end-group tag did not match expected tag
I tried it with Jedis as well, but the data was wrong there. I also tried sending it without the bytearray cast on the Python side, but I get the same error.
Does anyone have an idea concerning this issue?
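A debugging aid, not part of the original code: the exception means the bytes reaching parseFrom are not the exact bytes SerializeToString produced, so it helps to hex-dump the payload on both sides and compare them byte for byte (on the Python side, e.g. reading_string.encode('hex')). A hypothetical Java helper:

// Hex-dump a payload popped from Redis so it can be compared with the
// hex of the string the Python side pushed.
static String toHex(byte[] data) {
    StringBuilder sb = new StringBuilder();
    for (byte b : data) {
        sb.append(String.format("%02x", b));
    }
    return sb.toString();
}

Note that Redis BRPOP returns both the popped key and the value, so the loop in the Java code below also hands the key bytes to parseFrom; a hex dump makes that immediately visible.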
Code on python side:
tele_bytes = array("B")
# tele_bytes data comes from the serial interface
tele_bytes.append(ord(ser.read()))
tele_bytes.append(ord(ser.read()))

raw_data = ''.join(chr(x) for x in [
    tele_bytes[0],
    tele_bytes[1]
])

gw_id = '12345678'

reading = Reading_raw_data_pb2.ReadingRaw()
reading.timestamp = int(time())
reading.gw_id = gw_id
reading.raw_data = raw_data

reading_string = reading.SerializeToString()
r_server.lpush("testID", bytearray(reading_string))
Code on Java Side:
import com.google.protobuf.InvalidProtocolBufferException;
import redis.clients.jedis.BinaryJedis;
import protos.Reading4Java;
import java.io.IOException;
import java.util.List;

public class SimpleSubByte {
    public static Reading4Java.ReadingRaw trRaw = null;

    public static void main(String[] args) {
        BinaryJedis binaryJedis = new BinaryJedis("test.id.local");
        while (true) {
            List<byte[]> byteArray = binaryJedis.brpop(0, "testID".getBytes());
            for (byte[] e : byteArray) {
                // System.out.println(e);
                try {
                    trRaw = Reading4Java.ReadingRaw.parseFrom(e);
                } catch (InvalidProtocolBufferException e1) {
                    e1.printStackTrace();
                }
                System.out.println(trRaw);
            }
        }
    }
}
Protobuf file for Python:
package ttm;

message ReadingRaw {
  required string gw_id = 1;    // gateway id (e.g. mac address)
  required bytes raw_data = 2;  // raw data from serial interface
  optional int64 timestamp = 3; // timestamp of data reading from sensor device
}
Protobuf file for Java:
package protos;

option java_outer_classname = "Reading4Java";

message ReadingRaw {
  required string gw_id = 1;    // gateway id (e.g. mac address)
  required bytes raw_data = 2;
  optional int64 timestamp = 3;
}
I'd like to monitor Informatica ETL workflows from a custom Java program via the Informatica Development Platform (LMapi), v9.1.
I already have a C program that works fine, but it would be great to port it to Java.
We have a C DLL, JavaLMApi.dll, with asynchronous functions, e.g. INFA_LMMonitorServerA (with detailed event-log possibilities).
In the header we can see:
PMLM_API_EXTSPEC INFA_API_STATUS
INFA_LMMonitorServerA
(
    INFA_UINT32 connectionId,
    struct INFA_LMAPI_MONITOR_SERVER_REQUEST_PARAMS *request,
    void *clientContext,
    INFA_UINT32 *requestId
);
There is no documentation for this problem; the information above is all I have to work with.
The question is: how do I call/use INFA_LMMonitorServerA from Java? (Loading the DLL with JNA/JNI is not the problem, only the callback.)
static INFA_UINT32 nConnectionId = 0;

/* C "skeleton": */
void GetSrvDetailcallback(struct INFA_API_REPLY_CONTEXT* GetSrvDetailReplyCtxt)
{
    INFA_API_STATUS apiRet;
    struct INFA_LMAPI_SERVER_DETAILS *serverDetails = NULL;
    char *serverStatus = NULL;

    /* Check if the return status is Acknowledgement */
    if (GetSrvDetailReplyCtxt->returnStatus == INFA_REQUEST_ACKNOWLEDGED)
    {
        fprintf(stdout, "\nINFA REQUEST ACKNOWLEDGED \n\n");
        return;
    }

    apiRet = INFA_LMGetServerDetailsResults(GetSrvDetailReplyCtxt, &serverDetails);

    /* Check the return code if it is an error */
    if (INFA_SUCCESS != apiRet)
    {
        fprintf(stderr, "Error: INFA_LMGetServerDetailsResults returns %d\n", apiRet);
        return;
    }

    printResults(serverDetails);
}

static void myServer()
{
    struct INFA_LMAPI_CONNECT_SERVER_REQUEST_PARAMS_EX connectParamsex;
    INFA_API_STATUS apiRet;
    struct INFA_LMAPI_LOGIN_REQUEST_PARAMS loginparams;

    apiRet = INFA_LMLogin(nConnectionId, &loginparams, NULL);
    if (INFA_SUCCESS != apiRet)
    {
        fprintf(stderr, "Error: INFA_LMLogin returns %d\n", apiRet);
        return;
    }

    struct INFA_LMAPI_MONITOR_SERVER_REQUEST_PARAMS strMonitorRequestParams;

    /* Only running tasks */
    strMonitorRequestParams.monitorMode = INFA_LMAPI_MONITOR_RUNNING;
    strMonitorRequestParams.continuous = INFA_TRUE;

    /* Get server details */
    INFA_UINT32 GetSrvDetailsrequestId = 0;

    /* Register a callback function. */
    INFA_LMRegisterCallback(INFA_LMAPI_MONITOR_SERVER, &GetSrvDetailcallback);

    apiRet = INFA_LMMonitorServerA(nConnectionId, &strMonitorRequestParams, NULL, &GetSrvDetailsrequestId);
    if (INFA_SUCCESS != apiRet && INFA_REQUEST_PENDING != apiRet)
    {
        fprintf(stderr, "Error: INFA_LMMonitorServerA returns %d\n", apiRet);
        return;
    }
}
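For the callback itself, JNA maps a C function pointer to a Java interface extending com.sun.jna.Callback. Below is a rough sketch of what that could look like for the skeleton above; the struct parameters are reduced to raw Pointers (a real binding would model them as com.sun.jna.Structure subclasses), the INFA constants are placeholders, and the INFA_LMRegisterCallback signature is guessed from the C call site, so treat this as a starting point rather than a working binding:

import com.sun.jna.Callback;
import com.sun.jna.Library;
import com.sun.jna.Native;
import com.sun.jna.Pointer;
import com.sun.jna.ptr.IntByReference;

public class LmMonitor {

    public interface JavaLMApi extends Library {
        JavaLMApi INSTANCE = Native.load("JavaLMApi", JavaLMApi.class);

        // C: void (*cb)(struct INFA_API_REPLY_CONTEXT *)
        interface ReplyCallback extends Callback {
            void invoke(Pointer replyContext);
        }

        // Signatures guessed from the C skeleton; structs become Pointers.
        void INFA_LMRegisterCallback(int requestType, ReplyCallback cb);
        int INFA_LMMonitorServerA(int connectionId, Pointer requestParams,
                                  Pointer clientContext, IntByReference requestId);
    }

    // Keep a strong reference to the callback: if the JVM garbage-collects
    // it while the DLL still holds the function pointer, the process crashes.
    private static final JavaLMApi.ReplyCallback CALLBACK =
            new JavaLMApi.ReplyCallback() {
                @Override
                public void invoke(Pointer replyContext) {
                    // Runs on a native thread; read fields out of the reply
                    // context here (via a Structure mapping) and hand them
                    // over to your own code.
                    System.out.println("monitor reply received: " + replyContext);
                }
            };

    public static void main(String[] args) {
        // 0 is a placeholder for INFA_LMAPI_MONITOR_SERVER; take the real
        // value from the C header.
        JavaLMApi.INSTANCE.INFA_LMRegisterCallback(0, CALLBACK);

        IntByReference requestId = new IntByReference();
        int status = JavaLMApi.INSTANCE.INFA_LMMonitorServerA(
                0 /* connectionId */, Pointer.NULL, Pointer.NULL, requestId);
        System.out.println("INFA_LMMonitorServerA returned " + status);
    }
}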