Is there any way to skip (pass over) a field when parsing a truncated log in Kaitai Struct?
If it reads a field whose type is declared as an enum, but the value is not listed in that enum, it raises a NullPointerException.
So I want to ask if there is any way to achieve something like the default/Pass behavior in the Python library Construct.
Here is my ksy file:
meta:
  id: btsnoop
  endian: be
seq:
  - id: header
    type: header
  - id: packets
    type: packet
    repeat: eos
types:
  header:
    seq:
      - id: iden
        size: 8
      - id: version
        type: u4
      - id: datalink_type
        type: u4
        enum: linktype
  packet:
    seq:
      - id: ori_len
        type: u4
      - id: include_len
        type: u4
      - id: pkt_flags
        type: u4
      - id: cumu_drop
        type: u4
      - id: timestamp
        type: s8
      - id: data
        size: include_len
        type: frame
  frame:
    seq:
      - id: pkt_type
        type: u1
        enum: pkttype
      - id: cmd
        type: cmd
        if: pkt_type == pkttype::cmd_pkt
      - id: acl
        type: acl
        if: pkt_type == pkttype::acl_pkt
      - id: evt
        type: evt
        if: pkt_type == pkttype::evt_pkt
  cmd:
    seq:
      - id: opcode
        type: u2le
      - id: params_len
        type: u1
      - id: params
        size: params_len
  acl:
    seq:
      - id: handle
        type: u2le
  evt:
    seq:
      - id: status
        type: u1
        enum: status
      - id: total_length
        type: u1
      - id: params
        size-eos: true
enums: # <-- do I need to list every possible option in each enum?
  linktype:
    0x03E9: unencapsulated_hci
    0x03EA: hci_uart
    0x03EB: hci_bscp
    0x03EC: hci_serial
  pkttype:
    1: cmd_pkt
    2: acl_pkt
    4: evt_pkt
  status:
    0x0D: complete_D
    0x0E: complete_E
    0xFF: vendor_specific
Thanks for the reply :)
There are actually two separate questions you're facing here :)
Parsing partial / truncated / damaged data
The main problem here is that normally Kaitai Struct compiles a .ksy into code that does the actual parsing in the class constructor. That means that if a problem arises, boom, you've got no object at all. In most use cases this is the desired behavior, as it allows you to be sure that the object is fully initialized. The problem is usually an EOFException, raised when the format wants to read the next primitive but there's no data left in the stream, or, in some more complicated cases, something else.
However, there are some use cases, as you've mentioned, where "best effort" parsing would be helpful - i.e. you're ok with getting a half-filled object. Another popular use case is the visualizer: it's helpful to show "best effort" results there too, as it's better to show the user a half-parsed result (to aid in locating the error) than no result at all (leaving the user to guesswork).
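To make that "half-filled object" idea concrete, here is a tiny self-contained Java illustration; this is not Kaitai-generated code, just the general pattern of separating construction from a _read() step so that already-parsed data survives a truncated input:

import java.io.EOFException;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

class PartialParseDemo {
    final List<Integer> packets = new ArrayList<Integer>();
    private final ByteBuffer io;

    PartialParseDemo(ByteBuffer io) {
        this.io = io;          // "--debug style": the constructor does no parsing
    }

    void _read() throws EOFException {
        while (io.remaining() > 0) {
            if (io.remaining() < 4) {
                throw new EOFException("truncated packet");
            }
            packets.add(io.getInt());   // packets parsed so far stay in the object
        }
    }

    public static void main(String[] args) {
        // 5 bytes: one complete 4-byte packet (value 7) plus one truncated byte.
        PartialParseDemo p = new PartialParseDemo(ByteBuffer.wrap(new byte[] {0, 0, 0, 7, 9}));
        try {
            p._read();
        } catch (EOFException e) {
            System.out.println("warning: truncated input");
        }
        System.out.println(p.packets);  // prints [7] - the best-effort result survives
    }
}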
There's a simple solution for that in Kaitai Struct: you can compile your class with the --debug option. This way you'll get a class where object creation and parsing are separated; parsing becomes just another method of the object (void _read()). However, this means that you'll have to call the parsing method manually. For example, if your original code was:
Btsnoop b = Btsnoop.fromFile("/path/to/file.bin");
System.out.println(b.packets.size());
after you've compiled it with --debug, you'll have to do extra step:
Btsnoop b = Btsnoop.fromFile("/path/to/file.bin");
b._read();
System.out.println(b.packets.size());
and then you can wrap it up in a try/catch block and actually continue processing even after getting an IOException:
Btsnoop b = Btsnoop.fromFile("/path/to/file.bin");
try {
b._read();
} catch (IOException e) {
System.out.println("warning: truncated packets");
}
System.out.println(b.packets.size());
There are a few catches, though:
--debug was not yet available for the Java target as of release v0.3; actually, it's not even in the public git repository right now, though I hope to push it soon.
--debug also does a few extra things, like recording the position of every attribute, which imposes a pretty harsh performance / memory penalty. Tell me if you need a switch that compiles the "separate constructor/parsing" functionality without the rest of the --debug machinery - I can add an additional switch to enable just that.
If you need to continuously parse incoming packets as they arrive, it's probably a bad idea to store them all in memory and re-parse them all on every update. We're considering an event-based parsing model for that; please tell me if you'd be interested in it.
Missing enum values and NPE
The current Java implementation translates enum reading into something like
this.pet1 = Animal.byId(_io.readU4le());
where Animal.byId is translated into:
private static final Map<Long, Animal> byId = new HashMap<Long, Animal>(3);
static {
for (Animal e : Animal.values())
byId.put(e.id(), e);
}
public static Animal byId(long id) { return byId.get(id); }
Java's Map.get returns null by contract when no value is found in the map. You should be able to compare that null with something (i.e. another enum value) and get a proper true or false. Can you show me where exactly you get the NPE, i.e. your code, the generated code and the stack trace?
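For reference, a minimal self-contained sketch mirroring the generated byId pattern above; it shows that comparing the null result with == is safe, and that an NPE only appears once the null is dereferenced:

import java.util.HashMap;
import java.util.Map;

public class EnumNullDemo {
    enum Animal {
        CAT(1), DOG(2);

        private final long id;
        Animal(long id) { this.id = id; }
        public long id() { return id; }

        private static final Map<Long, Animal> byId = new HashMap<Long, Animal>();
        static {
            for (Animal e : Animal.values())
                byId.put(e.id(), e);
        }
        public static Animal byId(long id) { return byId.get(id); }  // returns null for unmapped ids
    }

    public static void main(String[] args) {
        Animal unknown = Animal.byId(99);            // 99 is not in the map -> null
        System.out.println(unknown == Animal.CAT);   // false; == against null is safe
        // unknown.name();                           // only dereferencing the null would throw an NPE
    }
}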
Related
I'm trying to communicate with a BLE device (a smart lamp).
I use the following dependency:
<dependency>
    <groupId>com.github.hypfvieh</groupId>
    <artifactId>bluez-dbus</artifactId>
    <version>0.1.3</version>
</dependency>
Very interesting library, by far the best I have found so far for BLE in terms of code quality, dependency management, and clarity...
The problem is, I have a working gatttool command like this, running on a Raspberry Pi 4:
gatttool --device=C4:AC:05:42:73:A4 -t random --char-write-req -a 0x1f -n a001037F
... which sets the brightness of the lamp to 100%. Note the value of the address (i.e. "-a 0x1f"), which corresponds to the "char value handle" attribute in gatttool's "characteristics":
handle: 0x001e, char properties: 0x28, char value handle: **0x001f**, uuid: **44092842-0567-11e6-b862-0002a5d5c51b**
I'm trying to do the same using bluez-dbus in Java. My implementation seems correct, but the lamp doesn't respond. I get the following trace with dbus-monitor:
method call time=1600276508.729104 sender=:1.184 -> destination=org.bluez serial=210 path=/org/bluez/hci0/dev_C4_AC_05_42_73_A4/service001d/**char001e**; interface=org.bluez.GattCharacteristic1; member=WriteValue
array of bytes [
0a 01 03 7f
]
array [
]
method return time=1600276508.776261 sender=:1.5 -> destination=:1.184 serial=6589 reply_serial=210
It looks like everything is fine, except that bluez-dbus picks up the value 0x001e (i.e. the "handle" in gatttool's characteristics) to drive the lamp, where it should have been 0x001f (the "char value handle" in gatttool).
Do you know if this is a misuse of the library, an error on the device, or something else?
Here is a little excerpt of the code; if you need more, you can look here: https://github.com/sebpiller/luke-roberts-lamp-f
BluetoothDevice lampF = manager.getDevices(true)
        .stream()
        .filter(e -> Objects.equals(e.getAddress(), config.getMac()))
        .findFirst()
        .get();
....
String uuid = config.getCustomControlService().getUuid();
BluetoothGattService customControlService = Objects.requireNonNull(lampF.getGattServiceByUuid(uuid));
LOG.info("found GATT custom control service {} at UUID {}", customControlService, uuid);
....
String externalApiUuid = config.getCustomControlService().getUserExternalApiEndpoint().getUuid();
externalApi = Objects.requireNonNull(customControlService.getGattCharacteristicByUuid(externalApiUuid));
...
private void sendCommandToExternalApi(LukeRoberts.LampF.Command command, Byte... parameters) {
    reconnectIfNeeded();
    try {
        externalApi.writeValue(/*reversed*/ command.toByteArray(parameters), Collections.emptyMap());
    } catch (DBusException e) {
        throw new IllegalStateException("unable to change brightness: " + e, e);
    }
}
Thanks for your time!
EDIT:
I am an idiotic-dyslexic. 0x0a is not the same as 0xa0.
Sometimes I'd like to crush my head on the wall....
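For the record, a minimal sketch of the corrected call (the first command byte is 0xA0, not 0x0A), reusing the externalApi characteristic and imports from the code above:

// Matches the working gatttool payload "a001037F".
byte[] brightness100 = { (byte) 0xA0, 0x01, 0x03, 0x7F };
externalApi.writeValue(brightness100, Collections.emptyMap()); // throws DBusException, handled as in sendCommandToExternalApi above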
Thanks for your help :)
gatttool is one of the eight tools that have been deprecated by BlueZ.
To debug this I would advise using bluetoothctl to work out what the correct paths are for the connected device. A session might look like this:
pi@raspberrypi:~ $ bluetoothctl
[bluetooth]# connect C4:AC:05:42:73:A4
[my lamp]# menu gatt
[my lamp]# select-attribute 44092842-0567-11e6-b862-0002a5d5c51b
[my lamp:/service0032/char0036]# write 0xa0 0x01 0x03 0x7F
Attempting to write /org/bluez/hci0/dev_C4_AC_05_42_73_A4/service0032/char0036
On the command line, showing all the paths can be done with generic D-Bus tools:
pi@raspberrypi:~ $ busctl tree org.bluez
Once you have the paths, you can do the write from the command line with D-Bus:
pi@raspberrypi:~ $ busctl call org.bluez /org/bluez/hci0/dev_DE_82_35_E7_43_BE org.bluez.Device1 Connect
pi@raspberrypi:~ $ busctl call org.bluez /org/bluez/hci0/dev_DE_82_35_E7_43_BE/service0032/char0036 org.bluez.GattCharacteristic1 WriteValue aya{sv} 4 0xa0 0x01 0x03 0x7f 0
Hopefully with the knowledge from these experiments you can better understand what is going on with the Java application.
I have a simple requirement to convert input JSON to a flat file in Mule 4, but I am unable to find any solid examples online. I started off by creating a sample schema as follows, but it's not working.
test.ffd schema:
form: FLATFILE
id: 'test'
tag: '1'
name: Request Header Record
values:
- { name: 'aa', type: String, length: 10 }
- { name: 'bb', type: String, length: 8 }
- { name: 'cc', type: String, length: 4 }
dataweave:
%dw 2.0
output application/flatfile schemaPath='test.ffd'
---
{
aa : payload.a,
bb : payload.b,
cc : payload.c
}
Input JSON:
{
"a": "xxx",
"b": "yyy",
"c": "zzz"
}
But it fails saying
Message : "java.lang.IllegalStateException - Need to specify structureIdent or schemaIdent in writer configuration, while writing FlatFile at
4| {
| ...
8| }
How do I do this correctly?
The error message tells you what is missing:
Need to specify structureIdent or schemaIdent in writer configuration
Add one of them and then the flat file or fixed width output should work fine.
For example, add segmentIdent:
%dw 2.0
output application/flatfile schemaPath = "test1.ffd",
segmentIdent = "test1"
---
payload map (a, index) -> {
aa: a.a,
bb: a.b,
cc: a.c
}
Here is an example of how to use FIXEDWIDTH properly: https://simpleflatservice.com/mule4/FixedWidthSchemaTransformation.html
Assuming you are trying to output a fixed width file, which it looks like you are, change
form: FLATFILE
to
form: FIXEDWIDTH
Keep in mind that using this FFD will only work if you have a single structure. You could pass in:
payload map {
aa: $.a,
...
}
if you had a collection, and it would still work; but if you need multiple structures, you won't be able to use the shorthand schema.
And to explain why you were getting this error, take a look at these docs, under "Writer properties (for Flat File)":
https://docs.mulesoft.com/mule-runtime/4.2/dataweave-formats#writer_properties_flat_file
Running Windows 8.1, Java 1.8, Scala 2.10.5, Spark 1.4.1, Scala IDE (Eclipse 4.4), IPython 3.0.0 and Jupyter Scala.
I'm relatively new to Scala and Spark, and I'm seeing an issue where certain RDD commands like collect and first fail with a "Task not serializable" error. What's unusual to me is that I see the error in IPython notebooks with the Scala kernel and in the Scala IDE; however, when I run the code directly in the spark-shell I do not get this error.
I would like to set up these two environments for more advanced code evaluation beyond the shell. I have little expertise in troubleshooting this type of issue and determining what to look for; if you can provide guidance on how to get started with resolving this kind of issue, that would be greatly appreciated.
Code:
val logFile = "s3n://[key:[key secret]@mortar-example-data/airline-data"
val sample = sc.parallelize(sc.textFile(logFile).take(100).map(line => line.replace("'","").replace("\"","")).map(line => line.substring(0,line.length()-1)))
val header = sample.first
val data = sample.filter(_!= header)
data.take(1)
data.count
data.collect
Stack Trace
org.apache.spark.SparkException: Task not serializable
org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:315)
org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:305)
org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:132)
org.apache.spark.SparkContext.clean(SparkContext.scala:1893)
org.apache.spark.rdd.RDD$$anonfun$filter$1.apply(RDD.scala:311)
org.apache.spark.rdd.RDD$$anonfun$filter$1.apply(RDD.scala:310)
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
org.apache.spark.rdd.RDD.withScope(RDD.scala:286)
org.apache.spark.rdd.RDD.filter(RDD.scala:310)
cmd49$$user$$anonfun$4.apply(Main.scala:188)
cmd49$$user$$anonfun$4.apply(Main.scala:187)
java.io.NotSerializableException: org.apache.spark.SparkConf
Serialization stack:
- object not serializable (class: org.apache.spark.SparkConf, value: org.apache.spark.SparkConf@5976e363)
- field (class: cmd12$$user, name: conf, type: class org.apache.spark.SparkConf)
- object (class cmd12$$user, cmd12$$user@39a7edac)
- field (class: cmd49, name: $ref$cmd12, type: class cmd12$$user)
- object (class cmd49, cmd49@3c2a0c4f)
- field (class: cmd49$$user, name: $outer, type: class cmd49)
- object (class cmd49$$user, cmd49$$user@774ea026)
- field (class: cmd49$$user$$anonfun$4, name: $outer, type: class cmd49$$user)
- object (class cmd49$$user$$anonfun$4, <function0>)
- field (class: cmd49$$user$$anonfun$4$$anonfun$apply$3, name: $outer, type: class cmd49$$user$$anonfun$4)
- object (class cmd49$$user$$anonfun$4$$anonfun$apply$3, <function1>)
org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:47)
org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:81)
org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:312)
org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:305)
org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:132)
org.apache.spark.SparkContext.clean(SparkContext.scala:1893)
org.apache.spark.rdd.RDD$$anonfun$filter$1.apply(RDD.scala:311)
org.apache.spark.rdd.RDD$$anonfun$filter$1.apply(RDD.scala:310)
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
org.apache.spark.rdd.RDD.withScope(RDD.scala:286)
org.apache.spark.rdd.RDD.filter(RDD.scala:310)
cmd49$$user$$anonfun$4.apply(Main.scala:188)
cmd49$$user$$anonfun$4.apply(Main.scala:187)
@Ashalynd was right about the fact that sc.textFile already creates an RDD. You don't need sc.parallelize in that case. Documentation here.
So, considering your example, this is what you'll need to do:
// Read your data from S3
val logFile = "s3n://[key:[key secret]@mortar-example-data/airline-data"
val rawRDD = sc.textFile(logFile)
// Fetch the header
val header = rawRDD.first
// Filter out the header, then map to clean each line
val sample = rawRDD.filter(!_.contains(header)).map { line =>
  val cleaned = line.replaceAll("['\"]", "") // strip both kinds of quotes
  cleaned.substring(0, cleaned.length - 1)   // drop the trailing character of the cleaned line
}.takeSample(false, 100, 12L) // takeSample returns a fixed-size sampled subset of this RDD in an array
It's better to use the takeSample function:
def takeSample(withReplacement: Boolean, num: Int, seed: Long = Utils.random.nextLong): Array[T]
withReplacement : whether sampling is done with replacement
num : size of the returned sample
seed : seed for the random number generator
Note 1: the sample is an Array[String], so if you wish to transform it into an RDD, you can use the parallelize function as follows:
val sampleRDD = sc.parallelize(sample.toSeq)
Note 2: If you wish to take a sample RDD directly from your rawRDD.filter(...).map(...), you can use the sample function, which returns an RDD[T]. Nevertheless, you'll need to specify a fraction of the data you need instead of a specific number.
sc.textFile already creates a distributed dataset (check the documentation). You don't need sc.parallelize in that case, but, as eliasah properly noted, you need to turn the result into an RDD again if you want to have an RDD.
val selection = sc.textFile(logFile). // RDD
  take(100).                          // local collection
  map(_.replaceAll("['\"]", "")).     // use a regex to match both quote chars
  map(_.init)                         // init returns all elements except the last
// turn the resulting collection into an RDD again
val sample = sc.parallelize(selection)
I tried to follow the zentasks tutorial for the play-java framework (I use the current Play Framework, which is 2.3.2). When it comes to testing and adding fixtures I'm kind of lost!
The documentation states:
Edit the conf/test-data.yml file and start to describe a User:
- !!models.User
    email: bob@gmail.com
    name: Bob
    password: secret
...
And I should download a sample (which is in fact a dead link!)
So I tried adding more Users myself, like this:
- !!models.User
    email: somemail1@example.com
    loginName: test1
- !!models.User
    email: somemail2@example.com
    loginName: test2
If I then try to load it via
Object load = Yaml.load("test-data.yml");
if (load instanceof List) {
    List list = (List) load;
    Ebean.save(list);
} else {
    Ebean.save(load);
}
I get the following Exception:
[error] Test ModelsTest.createAndRetrieveUser failed:
java.lang.IllegalArgumentException: This bean is of type [class
java.util.ArrayList] is not enhanced?, took 6.505 sec [error] at
com.avaje.ebeaninternal.server.persist.DefaultPersister.saveRecurse(DefaultPersister.java:270)
[error] at
com.avaje.ebeaninternal.server.persist.DefaultPersister.save(DefaultPersister.java:244)
[error] at
com.avaje.ebeaninternal.server.core.DefaultServer.save(DefaultServer.java:1610)
[error] at
com.avaje.ebeaninternal.server.core.DefaultServer.save(DefaultServer.java:1600)
[error] at com.avaje.ebean.Ebean.save(Ebean.java:453) [error]
at ModelsTest.createAndRetrieveUser(ModelsTest.java:18) [error]
...
How am I supposed to load more than one User (or whatever objects I wish) and save them without an exception?
In the Ebean class the save method is overloaded:
save(Object) - expects a parameter which is an entity (extends Model, has the @Entity annotation)
save(Collection) - expects a collection of entities.
The Yaml.load function returns an object which can be:
Entity
List of entities
But if we simply do:
Object load = Yaml.load("test-data.yml");
Ebean.save(load);
then the save(Object) method will be called. This is because at compile time the compiler doesn't know what exactly Yaml.load will return. So the above code will throw the exception posted in the question when there is more than one user in the "test-data.yml" file.
But when we cast the result to List, as in the code provided by the OP, everything works fine: the save(Collection) method is called and all entities are saved correctly. So the code from the question is correct.
I had the same problem with loading data from "test-data.yml", but I found a solution. Here is the solution code: http://kewool.com/2013/07/bugs-in-play-framework-version-2-1-1-tutorial-fixtures/ However, all Ebean.save calls must be replaced with Ebean.saveAll calls.
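Putting this together with the previous answer, a hedged sketch of a fixture loader (the class and method names are mine; it uses the cast from the question plus Ebean.saveAll as suggested above for newer Ebean versions):

import java.util.List;

import com.avaje.ebean.Ebean;
import play.libs.Yaml;

public class Fixtures {
    public static void loadTestData() {
        Object loaded = Yaml.load("test-data.yml");
        if (loaded instanceof List) {
            // A list of entities: use the collection-aware method (saveAll in newer Ebean versions).
            Ebean.saveAll((List<?>) loaded);
        } else {
            // A single entity.
            Ebean.save(loaded);
        }
    }
}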
I switched an existing code base to Java 7 and I keep getting this warning:
warning: File for type '[Insert class here]' created in the last round
will not be subject to annotation processing.
A quick search reveals that no one has hit this warning.
It's not documented in the javac compiler source either:
From OpenJDK\langtools\src\share\classes\com\sun\tools\javac\processing\JavacFiler.java
private JavaFileObject createSourceOrClassFile(boolean isSourceFile, String name) throws IOException {
    checkNameAndExistence(name, isSourceFile);
    Location loc = (isSourceFile ? SOURCE_OUTPUT : CLASS_OUTPUT);
    JavaFileObject.Kind kind = (isSourceFile ?
                                JavaFileObject.Kind.SOURCE :
                                JavaFileObject.Kind.CLASS);
    JavaFileObject fileObject =
        fileManager.getJavaFileForOutput(loc, name, kind, null);
    checkFileReopening(fileObject, true);
    if (lastRound) // <-------------------------------TRIGGERS WARNING
        log.warning("proc.file.create.last.round", name);
    if (isSourceFile)
        aggregateGeneratedSourceNames.add(name);
    else
        aggregateGeneratedClassNames.add(name);
    openTypeNames.add(name);
    return new FilerOutputJavaFileObject(name, fileObject);
}
What does this mean and what steps can I take to clear this warning?
Thanks.
The warning
warning: File for type '[Insert class here]' created in the last round
will not be subject to annotation processing
means that you were running an annotation processor that created a new class or source file using a javax.annotation.processing.Filer implementation (provided through the javax.annotation.processing.ProcessingEnvironment), although the processing tool had already decided it was "in the last round".
This may be a problem (and thus the warning) because the generated file itself may contain annotations that will be ignored by the annotation processor (because it is not going to do a further round).
The above ought to answer the first part of your question
What does this mean and what steps can I take to clear this warning?
(you figured this out already by yourself, didn't you :-))
What possible steps to take? Check your annotation processors:
1) Do you really have to use filer.createClassFile / filer.createSourceFile in the very last round of the annotation processor? Usually one uses the filer object inside a code block like
for (TypeElement annotation : annotations) {
...
}
(in the process method). This ensures that the annotation processor will not be in its last round (the last round always being the one with an empty set of annotations); see the fuller sketch after these points.
2) If you really can't avoid writing your generated files in the last round and these files are source files, trick the annotation processor and use the createResource method of the filer object (with SOURCE_OUTPUT as the location).
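For point 1, here is a minimal self-contained sketch (the annotation name and generated class are assumptions, not from the question) of a processor that writes its output inside the annotation loop, so nothing is generated in the last round:

import java.io.IOException;
import java.io.Writer;
import java.util.Set;

import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.annotation.processing.SupportedSourceVersion;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.TypeElement;
import javax.tools.Diagnostic;
import javax.tools.JavaFileObject;

// "com.example.MyAnnotation" and the generated class name are placeholders for illustration.
@SupportedAnnotationTypes("com.example.MyAnnotation")
@SupportedSourceVersion(SourceVersion.RELEASE_7)
public class GenProcessor extends AbstractProcessor {
    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        // The last round always delivers an empty annotation set, so generating inside
        // this loop guarantees we never write files in the last round.
        for (TypeElement annotation : annotations) {
            try {
                JavaFileObject fo = processingEnv.getFiler().createSourceFile("com.example.Gen");
                Writer out = fo.openWriter();
                out.write("package com.example;\nclass Gen { }\n");
                out.close();
            } catch (IOException e) {
                processingEnv.getMessager().printMessage(Diagnostic.Kind.ERROR, e.toString());
            }
        }
        return true;
    }
}

Because the last round comes with an empty set of annotations, the loop body simply never runs in that round, and the warning is not triggered.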
In the OpenJDK test case this warning is produced because the processor uses processingOver() to write a new file exactly in the last round.
public boolean process(Set<? extends TypeElement> elems, RoundEnvironment renv) {
    if (renv.processingOver()) { // Write only at last round
        Filer filer = processingEnv.getFiler();
        Messager messager = processingEnv.getMessager();
        try {
            JavaFileObject fo = filer.createSourceFile("Gen");
            Writer out = fo.openWriter();
            out.write("class Gen { }");
            out.close();
            messager.printMessage(Diagnostic.Kind.NOTE, "File 'Gen' created");
        } catch (IOException e) {
            messager.printMessage(Diagnostic.Kind.ERROR, e.toString());
        }
    }
    return false;
}
I modified the original example code a bit: added the diagnostic note "File 'Gen' created", replaced the "*" mask with "org.junit.runner.RunWith", and set the return value to "true". The produced compiler log was:
Round 1:
input files: {ProcFileCreateLastRound}
annotations: [org.junit.runner.RunWith]
last round: false
Processor AnnoProc matches [org.junit.runner.RunWith] and returns true.
Round 2:
input files: {}
annotations: []
last round: true
Note: File 'Gen' created
Compilation completed successfully with 1 warning
0 errors
1 warning
Warning: File for type 'Gen' created in the last round will not be subject to annotation processing.
If we remove my custom note from the log, it's hard to tell that the file 'Gen' was actually created in 'Round 2', the last round. So the basic advice applies: if in doubt, add more logging.
There is also a little bit of useful info on this page:
http://docs.oracle.com/javase/7/docs/technotes/tools/solaris/javac.html
Read the section about "ANNOTATION PROCESSING" and try to get more info with these compiler options:
-XprintProcessorInfo
Print information about which annotations a processor is asked to process.
-XprintRounds Print information about initial and subsequent annotation processing rounds.
I poked around the Java 7 compiler options and I found this:
-implicit:{class,none}
Controls the generation of class files for implicitly loaded source files. To automatically generate class files, use -implicit:class. To suppress class file generation, use -implicit:none. If this option is not specified, the default is to automatically generate class files. In this case, the compiler will issue a warning if any such class files are generated when also doing annotation processing. The warning will not be issued if this option is set explicitly. See Searching For Types.
Source
Can you try setting the -implicit option explicitly (e.g. -implicit:class) and see whether that clears the warning?