Capture input parameters for Java Unit Testing - java

I am working with two components, say A and B. Java development takes place in component B. I managed to modify the relevant Java code, generated a JAR using Maven, and deployed it to the run-time instance. All is good.
Component A invokes classes and methods in B. One method in B, call it processNode(), takes an input of type org.w3c.dom.Node. The input is generated at run time in A. For example, code in A invokes code in B as follows:
... This is the code of A
... display form
... accept input
... convert the input to an object of type org.w3c.dom.Node
... org.w3c.dom.Node inp = ....
B.Solution sol = new B.Solution();
sol.processNode(inp);
...
...
My objective is to build a unit test that hard-codes the value of the variable inp, to make the development cycle more efficient. That would let me avoid the full cycle of code, build, copy the JAR to the instance, restart the instance, start the application, provide the input, and check the log ... it is too long.
If I can somehow intercept the value of the input inp in component B and save it in text form, for example, then I can hard-code it in the unit test and convert it back to org.w3c.dom.Node before invoking B.Solution.processNode().
I can see the value of inp in the logs generated by component A as follows:
<?xml version="1.0" encoding="UTF-8"?>
<TskDta>
<fileNamePath>\path\to\file\filename.json</fileNamePath>
<fileData>{
"TranNumber": 12345,
"PropNumber": 12345,
"DateSigned": "2020-04-28T16:31:51.8937987Z",
"ValuatType": "002",
"BondAmt": 150000.0,
"TtlDNumber": "B2005-rexsd",
"PolNumber": "Policy123",
"MandOffName": "ManOffName123",
"AuthSigName": "Sig123",
"ConveName": "John123",
"CollatId": "Col351",
"AccNumber": "LoanAccontNumber123",
"Curr": " CurrencyType",
"TypOBond": "BondType",
"ProcGrp": "ProcessingGroup23",
"DeeOff": "6",
"BchUnt": "BrhUnit123"
}</fileData>
<archAlw>Y</archAlw>
<ArchFPath>path\to\arch\folder</ArchFPath>
<archiveFilePrefix>CollatLink</archiveFilePrefix>
<maxRet>3</maxRet>
<retryInter>2</retryInter>
<IBSerH>host.name.com</IBSerH>
<IBSerUName>UName</IBSerUName>
<IBSerP>asdcfre345</IBSerP>
<SSLOk>Y</SSLOk>
<httpConnectTimeout>1000</httpConnectTimeout>
<httpReadTimeout>2000</httpReadTimeout>
<endPointBaseURL>https://end.point.host</endPointBaseURL>
<authPartURL>/auht/part/url</authPartURL>
<busFuncPart>/func/part/url</busFuncPart>
<id_client>asdqwe</id_client>
<sec_client>dfsdfsdf</sec_client>
<grtype>cl_cred</grtype>
<scope>adasd</scope>
</TskDta>
I am guessing that I can convert the above XML into an object of type org.w3c.dom.Node; that is what I will base my research on.
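For the unit-test side, I imagine something like the following minimal sketch, using only the standard JAXP classes in the JDK (NodeFixture and fromXml are just placeholder names):
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Node;
import org.xml.sax.InputSource;

public class NodeFixture {
    // Parse the XML captured from the log (hard-coded as a string in the test)
    // back into an org.w3c.dom.Node.
    public static Node fromXml(String xml) throws Exception {
        return DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)))
                .getDocumentElement();
    }
}
The test could then call new B.Solution().processNode(NodeFixture.fromXml(hardCodedXml)) directly.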
I am checking whether there is a better way: for example, serializing the variable inp in B and saving it, hard-coding that value in the unit test, and then deserializing it back into the original object type when the unit test runs - roughly along the lines of the following sketch.
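For the capture side in B, the JDK's javax.xml.transform API should be enough to write the incoming Node out as text before it is processed (NodeCapture is again just a placeholder name):
import java.io.StringWriter;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Node;

public class NodeCapture {
    // Serialize the incoming Node to an XML string, so it can be logged or
    // written to a file and later hard-coded in the unit test.
    public static String toXml(Node node) throws Exception {
        Transformer transformer = TransformerFactory.newInstance().newTransformer();
        transformer.setOutputProperty(OutputKeys.INDENT, "yes");
        StringWriter out = new StringWriter();
        transformer.transform(new DOMSource(node), new StreamResult(out));
        return out.toString();
    }
}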
Is that possible?

I guess you want to convert XML and entity classes to each other. I see that your inp content contains JSON, so an example of converting JSON files and entity classes to each other may help you.
Here is my code.
import java.io.File;
import java.io.FileReader;
import java.io.Reader;
import com.fasterxml.jackson.databind.ObjectMapper;

// Serialize the inp object (Inp being a simple POJO with getters/setters) into a JSON file
File save = new File("D:\\test0.json");
ObjectMapper mapper = new ObjectMapper();
Inp inp = new Inp();
inp.setName("sssss");
mapper.writeValue(save, inp);
// Deserialize the JSON file back into an Inp object
Reader reader = new FileReader(save);
Inp inp1 = mapper.readValue(reader, Inp.class);
System.out.println(inp1);

Related

Dealing with command line options in a typesafe manner, after parsing with JOpts

When parsing options with the JOpt Simple library, the only way I know of accessing their respective types is by saving the OptionSpec instances somewhere and accessing them later. For example:
OptionParser parser = new OptionParser();
ArgumentAcceptingOptionSpec<Path> pathOption = parser
        .acceptsAll(Arrays.asList("p", "path"),
                "Read message from a file")
        .withRequiredArg()
        .withValuesConvertedBy(new PathConverter());
/* We will use this one later */
OptionSpec<Path> pathOptionSpec = pathOption;
OptionSet parsedOptions = parser.parse("-path", "./foo/bar");
/*
 * See? Unless I missed something, you need the pathOptionSpec object
 * (which is of type OptionSpec<Path>) to retrieve the "path" option's value
 * with its proper type. If I used .valueOf("path"), it would return a plain
 * Object and I would have to typecast it!
 */
assertThat(
        parsedOptions.valueOf(pathOptionSpec),
        instanceOf(Path.class));
Now, what I did was save the OptionParser, the OptionSet and every OptionSpec as static members of the Main class, and access those options from anywhere in the program (you may screech and raise your pitchforks now).
Is there a clean way to manage parsed CLI arguments in a typesafe way?
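For reference, the non-static equivalent of that workaround could look roughly like this sketch (CliOptions is a made-up name): the typed specs and the parsed OptionSet live together in one holder object that gets passed around instead of sitting as static members on Main.
import java.nio.file.Path;
import java.util.Arrays;
import joptsimple.OptionParser;
import joptsimple.OptionSet;
import joptsimple.OptionSpec;
import joptsimple.util.PathConverter;

// Made-up holder: keeps the specs next to the parsed result,
// so typed access never needs a cast and nothing has to be static.
final class CliOptions {
    private final OptionSpec<Path> pathSpec;
    private final OptionSet parsed;

    CliOptions(String... args) {
        OptionParser parser = new OptionParser();
        pathSpec = parser
                .acceptsAll(Arrays.asList("p", "path"), "Read message from a file")
                .withRequiredArg()
                .withValuesConvertedBy(new PathConverter());
        parsed = parser.parse(args);
    }

    Path path() {
        return parsed.valueOf(pathSpec); // typed, no cast needed
    }
}
Whether that counts as "clean" is, of course, part of the question.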

Parse a single POJO from multiple YAML documents representing different classes

I want to use a single YAML file which contains several different objects - for different applications. I need to fetch one object to get an instance of MyClass1, ignoring the remaining objects for MyClass2, MyClass3, etc. Some sort of selective deserialization: now this class, then that one... The structure of MyClass2 and MyClass3 is totally unknown to the application working with MyClass1. The file is always valid YAML, of course.
The YAML may have any structure we need to implement such a multi-class container. The preferred parsing tool is snakeyaml.
Is it sensible? How can I ignore all but one object?
UPD: replaced all "document" with "object". I think we have to speak about a single YAML document containing several objects of different structure. What's more, the parser knows exactly one structure and wants to ignore the rest.
UPD2: I think it is impossible with snakeyaml. We have to read all objects anyway - and select the needed one later. But maybe I'm wrong.
UPD2: sample config file
---
-
  exportConfiguration781:
    attachmentFieldName: "name"
    baseSftpInboxPath: /home/user/somedir/
    somebool: false
    days: 9999
    expected:
      - ABC w/o quotes
      - "Cat ABC"
      - "Some string"
    dateFormat: yyyy-MMdd-HHmm
    user: someuser
-
  anotherConfiguration:
    k1: v1
    k2:
      - v21
      - v22
This is definitely possible with SnakeYAML, albeit not trivial. Here's a general rundown of what you need to do:
First, let's have a look at what loading with SnakeYAML does. Here's the important part of the YAML class:
private Object loadFromReader(StreamReader sreader, Class<?> type) {
    Composer composer = new Composer(new ParserImpl(sreader), resolver, loadingConfig);
    constructor.setComposer(composer);
    return constructor.getSingleData(type);
}
The composer parses YAML input into Nodes. To do that, it doesn't need any knowledge about the structure of your classes, since every node is either a ScalarNode, a SequenceNode or a MappingNode and they just represent the YAML structure.
The constructor takes a root node generated by the composer and generates native POJOs from it. So what you want to do is to throw away parts of the node graph before they reach the constructor.
The easiest way to do that is probably to derive from Composer and override getNode() like this:
public class MyComposer extends Composer {
    private final int objIndex;

    public MyComposer(Parser parser, Resolver resolver, int objIndex) {
        super(parser, resolver);
        this.objIndex = objIndex;
    }

    public MyComposer(Parser parser, Resolver resolver, LoaderOptions loadingConfig, int objIndex) {
        super(parser, resolver, loadingConfig);
        this.objIndex = objIndex;
    }

    @Override
    public Node getNode() {
        return strip(super.getNode());
    }

    private Node strip(Node input) {
        return ((SequenceNode) input).getValue().get(objIndex);
    }
}
The strip implementation is just an example. In this case, I assumed your YAML looks like this (object content is arbitrary):
- {first: obj}
- {second: obj}
- {third: obj}
And you simply select the object you actually want to deserialize by its index in the sequence. But you can also have something more complex like a searching algorithm.
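For example, a searching strip() could pick the element whose single mapping key matches a given name - a sketch only, assuming the top-level structure is a sequence of single-key mappings as in the sample config above (it needs imports for MappingNode, NodeTuple and ScalarNode from org.yaml.snakeyaml.nodes):
private Node strip(Node input, String wantedKey) {
    for (Node item : ((SequenceNode) input).getValue()) {
        for (NodeTuple tuple : ((MappingNode) item).getValue()) {
            String key = ((ScalarNode) tuple.getKeyNode()).getValue();
            if (wantedKey.equals(key)) {
                // return the content under the key, e.g. the exportConfiguration781 mapping
                return tuple.getValueNode();
            }
        }
    }
    throw new IllegalArgumentException("no object with key " + wantedKey);
}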
Now that you have your own composer, you can do
// sreader wraps your YAML input, e.g.:
StreamReader sreader = new StreamReader(new FileReader("config.yaml"));
Constructor constructor = new Constructor();
// assuming we want to get the object at index 1 (i.e. the second object)
Composer composer = new MyComposer(new ParserImpl(sreader), new Resolver(), 1);
constructor.setComposer(composer);
MyObject result = (MyObject) constructor.getSingleData(MyObject.class);
The answer of @flyx was very helpful for me, opening the way to work around the library's (in our case snakeyaml's) limitations by overriding some methods. Thanks a lot! It's quite possible the final solution lies in that direction - but not for now. Besides, the simple solution below is robust and should be considered even if we had found a complete library-intruding solution.
I've decided to solve the task by double distilling - sorry, processing - the configuration file. Imagine the latter consisting of several parts, where every part is marked by a unique token-delimiter. For the sake of keeping it YAML-like, it may be
---
#this is a unique key for the configuration A
<some YAML document>
---
#this is another key for the configuration B
<some YAML document>
The first pass is pre-processing. For the given String fileString and String key (and, for example, DELIMITER = "\n---\n") we select the substring with the key-defined configuration:
int begIndex;
do {
    begIndex = fileString.indexOf(DELIMITER);
    if (begIndex == -1) {
        break;
    }
    if (fileString.startsWith(DELIMITER + key, begIndex)) {
        fileString = fileString.substring(begIndex + DELIMITER.length() + key.length());
        break;
    }
    // spoil alien delimiter and repeat search
    fileString = fileString.replaceFirst(DELIMITER, " ");
} while (true);
int endIndex = fileString.indexOf(DELIMITER);
if (endIndex != -1) {
    fileString = fileString.substring(0, endIndex);
}
Now we feed fileString to plain YAML parsing:
ExportConfiguration configuration = new Yaml(new Constructor(ExportConfiguration.class))
        .loadAs(fileString, ExportConfiguration.class);
This time we have a single document that must correspond to the ExportConfiguration class.
Note 1: The structure and even the content of the rest of the configuration file plays absolutely no role. That was the main idea: to get independent configurations into a single file.
Note 2: The rest of the configurations may be JSON or XML or whatever. We have a pre-processing method that returns a String configuration, and the next processor parses it properly.

Compile time check while passing values to a function in Kotlin Android

I am taking a JSON file as input for a class and parsing the values using gson through respective data classes.
I want to call a function that takes a String value as an argument.
The allowed string values are decided by the values parsed from the JSON file. Can I somehow check the string value passed to the function at compile time and give a compile-time error?
Or can I allow only certain values as the argument to the function, based on the values from the JSON?
Detailed explanation of the use case:
I am building an SDK in which the person using the SDK inputs a JSON string. The JSON is standardised and is parsed in my code.
{
  "name": "Test",
  "objects": [
    {
      "name": "object1",
      "type": "object1"
    }
  ]
}
Here the name values and other values may vary based on the input from the developer using it, but the keys remain the same. We need to call a function using the value of the name key inside objects.
fun testMethod(objectName: String)
So the developer calls the test method as testMethod("object1").
I need to validate the objectName parameter against the JSON, but is there any way to restrict the testMethod parameter to "object1" only and give a compile-time error if the developer calls testMethod("obj1")?
Right now I parse the JSON and have checks inside testMethod().
Sure, it's possible, but in a somewhat different way than you described. First of all, as you already mentioned, the runtime behaviour is easy to get: for that purpose we have Objects.requireNonNull() or Guava's Preconditions. In the same way you can define your own checks, but they will only run at run time.
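For example, the runtime variant could be as simple as this sketch (written in Java; a Kotlin require(...) check would be the equivalent, and allowedNames stands for whatever set of names was parsed from the JSON):
import java.util.Set;

class ObjectNameValidator {
    private final Set<String> allowedNames; // filled from the parsed JSON

    ObjectNameValidator(Set<String> allowedNames) {
        this.allowedNames = allowedNames;
    }

    void testMethod(String objectName) {
        if (!allowedNames.contains(objectName)) {
            throw new IllegalArgumentException("Unknown object name: " + objectName);
        }
        // ... real work ...
    }
}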
To do it at compile time, you need to create an annotation processor - the same thing that is done in various libraries, one of them being Lombok with annotations like NonNull and Nullable. Android annotations just provide a marker and bounds for IDE warnings, but in Lombok's case the null check is generated at compile time for every annotated usage and throws an exception at run time.
It's not an easy way, but it's what you are looking for.
No, it's impossible to check this at compile time. It is string handling, just like a numeric calculation would be.
In my app, I convert strings to JSON and JSON to strings, passing a class descriptor. My aim is to record the JSON string in a text file to load into an SQLite database. I ran this code on my desktop computer, not on Android.
data class CalcDescr (
    ...
)
val calc = CalcDescr(...)
// toJson: internal Kotlin data to JSON
val content = Gson().toJson(calc)
//==================
// Testing validity
// =================
// fromJson: JSON to internal Kotlin data.
// It needs the class descriptor passed in. Uses a *Java* class token, but it's *Kotlin*
var testModel = Gson().fromJson(content, CalcDescr::class.java)
// toJson: internal Kotlin data to JSON again
var contentAgain = Gson().toJson(testModel)
// should be equal!
if (content == contentAgain) println("***ok***")
In my app, I then write the variable content to a file.

Is it possible to have a mixed variable in Java like in PHP?

For example in PHP:
<?php
$my_var = 1;
$my_var = [ "1" ]; // no problem
In Java, I tried to make a Class with a mixed argument:
class MyClass {
    public String my_var;
}
MyClass c = new MyClass();
c.my_var = "ok";
c.my_var = 1; // compile-time error
The reason I am asking is that I am using Jersey with JSON, and I am trying to make a Feedback class.
One argument in my Feedback class is the data to send back to the front-end in an Ajax manner. Sometimes data is a normal string:
`{ "data" : "hello world" }`
Sometimes it can be an array
`{ "data" : [ "hello", "world" ] }` or `{ "data" : [ { "id": 1 }, { "id": 2 }, ... ] }`
I am a real newbie when it comes to the Java language and libraries. The only solution I can think of right now is to override the toString method for the objects involved in the JSON encoding, but calling toString on a List will give me something like this,
`{ "data" : "[ ...]" }` which is incorrect.
you can do this:
Object c = new Object();
c = new String("hi"); //valid
c = new Integer(1); //valid
Then, to check what type of value it holds, you can use instanceof.
You can also make it an array or a list, and if you want toString to print any such object generically, you can make your own list class with an overridden toString method that prints it nicely, and use that class in your object.
Using objects with nice toString methods means that you won't have to use instanceof if all you want is a string representation of the object.
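For example (a minimal sketch; the Object-typed field stands for the data attribute of your Feedback class):
import java.util.List;

class DataInspector {
    // Decide at runtime what kind of value the Object-typed field holds.
    static String describe(Object data) {
        if (data instanceof String) {
            return "single value: " + data;
        } else if (data instanceof List) {
            return "list of values: " + data;
        }
        return "unexpected type: " + data.getClass().getName();
    }
}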
Java is a statically typed language; "dynamics" are limited to things such as assigning a subclass value to a superclass variable, like:
Object o = "a string";
o = new Double(42.0);
So it looks like o can be both a String and a Double. But as said, that is because Object is a supertype of both those classes.
This for example:
String s = "meow";
s = 42;
leads to a compiler error. What would work, on the other hand, is s = Integer.toString(42), for example.
Similarly, you could do:
Object o = new int[5];
because in Java any reference type (including arrays) is a sub class of Object.
And just to be precise: the Java people think this is an advantage, because it allows more checking at compile time - catching many errors that require you to write/run unit tests in those dynamic languages where a duck can meow like a cat and still be a barking dog in the end.
Coming back to your question: from a Java point of view, an array is not the same as a string, and therefore you would not allow such a JSON representation. In other words: I would first try to change the JSON format.
If that is not possible, I would look into customized JSON de/serialization code - in order to have the framework decide which case is given and do the appropriate thing. Just as inspiration, you can have a look at how this can be done with gson.
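For example, a rough sketch with gson: a custom deserializer that accepts "data" either as a single string or as an array and always exposes it as a list on the Java side (the Feedback class and all names here are made up for illustration; with Jersey you would plug the equivalent hook into whichever JSON provider it uses):
import java.lang.reflect.Type;
import java.util.ArrayList;
import java.util.List;
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;
import com.google.gson.JsonDeserializationContext;
import com.google.gson.JsonDeserializer;
import com.google.gson.JsonElement;

class Feedback {
    List<String> data; // always a list in Java, even if the JSON sends a single string
}

class FeedbackDeserializer implements JsonDeserializer<Feedback> {
    @Override
    public Feedback deserialize(JsonElement json, Type type, JsonDeserializationContext ctx) {
        Feedback feedback = new Feedback();
        feedback.data = new ArrayList<>();
        JsonElement data = json.getAsJsonObject().get("data");
        if (data == null || data.isJsonNull()) {
            return feedback;
        }
        if (data.isJsonArray()) {
            for (JsonElement element : data.getAsJsonArray()) {
                // keep primitives as plain strings, anything else as raw JSON
                feedback.data.add(element.isJsonPrimitive() ? element.getAsString() : element.toString());
            }
        } else {
            feedback.data.add(data.getAsString());
        }
        return feedback;
    }
}

class FeedbackDemo {
    public static void main(String[] args) {
        Gson gson = new GsonBuilder()
                .registerTypeAdapter(Feedback.class, new FeedbackDeserializer())
                .create();
        // both JSON shapes end up in the same Java type
        System.out.println(gson.fromJson("{ \"data\" : \"hello world\" }", Feedback.class).data);
        System.out.println(gson.fromJson("{ \"data\" : [ \"hello\", \"world\" ] }", Feedback.class).data);
    }
}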
You can't do that in Java, so you'd have to use an indirect workaround. Don't count on future versions either: the local-variable type inference (var) planned for Java 10 (Java is currently at 8, almost moved to 9) still gives every variable a fixed static type.
Think the OOP way:
Have an interface like FeedBackResponse, and have different implementation classes for each of the cases; make them implement FeedBackResponse.
Each class will have different attributes.
Serialize the object using a JSON library.
For example,
FeedBackResponse response = getResponse("Some Service");
return jsonLib.serialize(response);
Factory
private FeedBackResponse getResponse(String attr) {
    // based on attr, create your object here
}
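A rough sketch of that idea (all names are illustrative; the JSON library then serializes whichever implementation comes back):
import java.util.Arrays;
import java.util.List;

interface FeedBackResponse { }

// case 1: data is a single string -> { "data" : "hello world" }
class TextFeedBack implements FeedBackResponse {
    String data;
    TextFeedBack(String data) { this.data = data; }
}

// case 2: data is a list -> { "data" : [ "hello", "world" ] }
class ListFeedBack implements FeedBackResponse {
    List<String> data;
    ListFeedBack(List<String> data) { this.data = data; }
}

class FeedBackFactory {
    // based on attr, create the appropriate implementation (the condition is made up)
    static FeedBackResponse getResponse(String attr) {
        if ("list".equals(attr)) {
            return new ListFeedBack(Arrays.asList("hello", "world"));
        }
        return new TextFeedBack("hello world");
    }
}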

How to group certain file-information into several objects based on field description in Java

I have a pretty complex text file, built like below:
FieldA-Value
FieldB-Value
.
.
.
FieldC-Value
FieldD-Value
.
.
.
FieldC-Value
...
Now my goal is to build an object out of each of these - let's call them field groups. I start with the header (FieldA and FieldB) and read in all the lines until I find the first FieldC. Then everything is stored in a Header object.
Now the problem: as you can see, FieldC and the fields that follow it appear multiple times. After I have stored the header and go looking for FieldC, the loop (of course) stops at the first occurrence and builds the object without all the information that comes after it.
So basically what I want is to get all the data of all the C-field groups.
I am using a FileReader in combination with a LineNumberReader (shortened code below).
fr = new FileReader(file);
lnr = new LineNumberReader(fr);
while ((line = lnr.readLine()) != null) {
    // code to split the line into field name and value and put them in a map
    if (name.equals("A")) {
        new ObjectFromClassA(map);
    } else if (name.equals("C")) {
        // Here is the problem
        new ObjectFromClassB(map);
    }
}
Thanks in advance
You should only call new when the name is A. Whenever the name equals something else (for example C), call update(map) on the object you already created. You have to write this update() method yourself in your own class; it should add the given fields to the instance.
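A rough sketch of how the reading loop could collect every C group (the class name, field names and the split on "-" are assumptions based on the question's example; each group simply keeps its fields in a map):
import java.io.FileReader;
import java.io.IOException;
import java.io.LineNumberReader;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class FieldGroupReader {
    Map<String, String> header = new HashMap<>();           // FieldA, FieldB, ...
    List<Map<String, String>> cGroups = new ArrayList<>();  // one map per C group

    void read(String file) throws IOException {
        try (LineNumberReader lnr = new LineNumberReader(new FileReader(file))) {
            Map<String, String> current = header;            // start with the header
            String line;
            while ((line = lnr.readLine()) != null) {
                String[] parts = line.split("-", 2);          // e.g. "FieldC-Value"
                if (parts.length < 2) {
                    continue;                                 // skip lines without a value
                }
                if (parts[0].equals("FieldC")) {              // FieldC starts a new group
                    current = new HashMap<>();
                    cGroups.add(current);
                }
                current.put(parts[0], parts[1]);
            }
        }
    }
}
Once the file is read, header can be passed to the header object's constructor and every map in cGroups to the constructor of the group class (ObjectFromClassB in the question's terms).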
