How should I send data to a Java method from a mapping with ODI 12c - java

Recently I started working with ODI 12c procedures; until now I had only worked with mappings. Now I have a mapping with different tables and joins, and I need to do calculations across columns. For that, I must use a Java method, so I have something like this:
public static List<Map<String, String>> seg(List<Map<String, String>> comp) {
    for (Map<String, String> map : comp) {
        if (total > 0 && min1 != min1_fin) {
            rest = total - min1;
            total -= min1;
            map.replace("min1_fin", rest);
            map.replace("total", total);
        } else {
            a = true;
        }
        if (a) { // (operation for next column)
            if (total > 0 && min2 != min2_fin) {
                rest = total - min2;
                ...
            }
        }
    }
    return comp;
}
My list:

KEY   TOTAL   MIN1    MIN2    MIN1_FIN   MIN2_FIN
----  ------  ------  ------  ---------  ---------
1     35,14   61,85   91,85   0          0
1     35,14   8,09    58,32   0          0
2     85,67   6       6       0          0
2     85,67   67,6    71,47   0          0
I have thought about putting everything in a package, with my code either directly in a procedure or in a JAR that I call (I still don't know how).
Is that possible? How can I send the data to my Java method that way and read the result when it returns?

Using Java to do the transformation is not the best pattern if the result needs to be stored in a database. Doing it in SQL will be much more efficient.
Anyway, if you really want to use Java, you can pass data from the Source command to the Target command of any Procedure step or KM step by binding it. Here is the doc about it: https://docs.oracle.com/cd/E15586_01/integrate.1111/e12643/procedures.htm#CHDGDJGB
Make sure to select the "Multi-Connections" checkbox in the definition of the Procedure. The data will pass through the execution agent.
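If you do go the Java route (for example packaging the method in a JAR, as suggested in the question), here is a hedged sketch of what a self-contained version of the seg method could look like. It assumes each row arrives as a Map from column name to string value with comma decimal separators, as in the list above; the class name and the parsing details are illustrative only, not part of the original code.

import java.util.List;
import java.util.Map;

public class SegCalculator {

    // Sketch only: assumes each row map holds the columns shown in the question
    // ("total", "min1", "min1_fin", ...) as decimal strings such as "35,14".
    public static List<Map<String, String>> seg(List<Map<String, String>> comp) {
        for (Map<String, String> row : comp) {
            double total   = parse(row.get("total"));
            double min1    = parse(row.get("min1"));
            double min1Fin = parse(row.get("min1_fin"));
            if (total > 0 && min1 != min1Fin) {
                double rest = total - min1;
                total -= min1;
                row.put("min1_fin", String.valueOf(rest));
                row.put("total", String.valueOf(total));
            }
            // ... same pattern for min2 / min2_fin, as in the question
        }
        return comp;
    }

    private static double parse(String value) {
        return Double.parseDouble(value.replace(',', '.'));
    }
}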

Related

Design ideas needed for a bunch of conditional if-else

I have simple logic to implement, but I'm not sure whether there is a better way to design it other than plain if-else or switch statements.
There are 4 permissions (consider them boolean variables), which can be true or false. Based on various conditions (combinations of those permissions), I need to return a list of String values to be displayed in a dropdown field on the UI.
So it's like this for now:
if(!permission1 && !permission2){return list_of_strings_1;}
else if (permission1 && permission2 && !permission3){return list_of_strings_2;}
and so on. Some of them are plain if statements, so multiple conditions may be true at once and we have to collect all the matching lists of strings and display them.
These if-elses go on for quite a while (about 100 lines). Each returns a different list of strings. Most of it is NOT likely to change in the future, so an elaborate design may be overkill.
I'm just wondering how experts would refactor this code (or whether they would refactor it at all). Maybe sticking to switch/if-else is OK?
I don't understand how four flags give you 100 lines of code. This can be done with a map of 16 entries (or fewer, if some combinations are invalid and can be mapped to a default). If the string representation is truly a list of strings, one for each possible permission, the solution is even more compact.
The key is an object representing the combination of permissions, and the value is the string representation for that combination. You could create a custom type for the key, but in this example, I'm just using four bits of an integer, where each bit indicates whether the permission is granted or not:
private static final int P1 = 1 << 0, P2 = 1 << 1, P3 = 1 << 2, P4 = 1 << 3;

private static final Map<Integer, String> permissionsToString = Map.ofEntries(
        Map.entry( 0, "No permissions granted."),
        Map.entry( 1, "Permissions 2-3 revoked."),
        Map.entry( 2, "Permission 2 granted."),
        ...
        Map.entry(14, "Permission 1 revoked"),
        Map.entry(15, "Superuser"));

public static String toString(boolean p1, boolean p2, boolean p3, boolean p4) {
    // each granted permission sets its bit in the key
    int key = (p1 ? P1 : 0)
            | (p2 ? P2 : 0)
            | (p3 ? P3 : 0)
            | (p4 ? P4 : 0);
    return permissionsToString.get(key);
}
If you don't understand bits, you can use an EnumSet or define your own value object to represent the key at a higher level. The idea is the same: map all possible combinations (2^4 = 16) to their corresponding label.
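For illustration, a minimal sketch of the EnumSet variant (the enum constant names and labels are placeholders, and only a few of the 16 combinations are shown):

import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

enum Permission { P1, P2, P3, P4 } // placeholder names

class PermissionLabels {
    // each combination of granted permissions maps directly to its label
    private static final Map<Set<Permission>, String> LABELS = Map.of(
            EnumSet.noneOf(Permission.class), "No permissions granted.",
            EnumSet.of(Permission.P1),        "Permissions 2-3 revoked.",
            EnumSet.allOf(Permission.class),  "Superuser");

    static String toLabel(Set<Permission> granted) {
        return LABELS.getOrDefault(granted, "Unknown combination");
    }
}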

Passing data from tJavaRow to tJava in Talend

I'm using Talend for an integration, and I'm wondering whether it is possible to pass data from a tJavaRow to a tJava component.
For example:
tJavaRow component:
String check = input_row.foo;
if( check.contains("Yes")){
String ret = "OK";
return ret;
}
tJava component:
System.out.println(ret);
Is there a way to print ret, the result of a computation in a previous component, inside the next component?
The solution is to use the globalMap or a tSetGlobalMap
globalMap.put("ret", ret);
and recover it with
globalMap.get("ret");
/!\ IMPORTANT /!\
But note that if you use a tJava in a main flow like
tRowGenerator > row1 > tJava > row2 > tLogRow
with tRowGenerator generating 10 rows numbered 1 to 10,
tJava containing System.out.println("foo");
and tLogRow printing the numeric value,
The output will be
foo
1
2
3
4
5
6
7
8
9
10
The code in tJava is only executed once, before the first row is even generated. If you check the generated code, you can see something like:
System.out.println("foo");
....
for(int i = 0; i < 10; i++){
logrow.print(i);
}

How to get 2 counts based on two different values in a list and check whether the two counts are equal?

I have a list "List issues" this list will hold all the issues from all the projects.
From this "issues" object i can get issues.Project, Issues.Status inside the loop.
I wanted to do the below mentioned operations.
List<Issue> issues = issueCollector.get().getAppropriateIssues();
for (int i=0;i< issues.size();i++)
{
Issue iss = issues.get(i);
}
e.g.:

Project   IssueKey   Status
-------   --------   --------
PRJ 1     issKey 1   Closed
PRJ 1     issKey 2   Resolved
PRJ 2     isskey 1   Open
PRJ 3     issKey 1   Closed
PRJ 3     issKey 2   Resolved
PRJ 3     issKey 3   Closed
First, I want to get the count of issues per PROJECT and store it in a variable. How can I get values like the ones below and store them in a collection variable?

PROJECT | Count(Issues)
PRJ 1   | 2
PRJ 2   | 1
PRJ 3   | 3

Second, I want to get the count of issues per project whose status is CLOSED or RESOLVED and store it in a variable. How can I get values like the ones below and store them in a collection variable?

PROJECT | Count(Issues in CLOSED or RESOLVED)
PRJ 1   | 2
PRJ 3   | 3
Then, from these two variables, I want to check a condition like:

if (PRJ1(2 issues) == PRJ1(2 issues with status))
{
    // add this PROJECT to a List of String
    List<String> val = new ArrayList<>();
    val.add(PROJECT);
}
For flexibility (it may be that you later have to check open issues, or sums of this or that), I advise introducing a small class IssueStatus which keeps all the issue counts for a project. You can declare it as a nested class inside another class, by the way.
class IssueStatus {
    int numOfClosed = 0;
    int numOfResolved = 0;
    int numOfOpen = 0;

    // not sure if status is a String or an enum
    void addStatusCount(String status) {
        // logic to increment the matching counter,
        // e.g. if "closed", then numOfClosed++
    }

    int getNumOfClosed()   { return numOfClosed; }
    int getNumOfResolved() { return numOfResolved; }
    int getNumOfOpen()     { return numOfOpen; }
    int getTotalIssues()   { return numOfClosed + numOfResolved + numOfOpen; }
}
You could consider adding the project name to the object, but here I've used a map to associate an IssueStatus with each project:

Map<String, IssueStatus> issueStatusMap = new HashMap<>();
To populate the map, just use your loop:

for (int i = 0; i < issues.size(); i++) {
    Issue iss = issues.get(i);
    // check if the given project is already in the map -> if not, add an IssueStatus instance
    if (!issueStatusMap.containsKey(iss.Project)) {
        issueStatusMap.put(iss.Project, new IssueStatus());
    }
    // add the issue's status to the counts
    issueStatusMap.get(iss.Project).addStatusCount(iss.Status);
}
You could also fill the map with Java 8's stream().forEach(...), as sketched below. Once the map is populated, it's easy to pull statistics out of it.
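A minimal sketch of that stream-based population (assuming the same Project and Status field access as in the question):

issues.stream().forEach(iss ->
        issueStatusMap.computeIfAbsent(iss.Project, p -> new IssueStatus())
                      .addStatusCount(iss.Status));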
// now you only have to read the data out
// 1) sum of issues per project
for (Map.Entry<String, IssueStatus> entry : issueStatusMap.entrySet()) {
    System.out.println("project name: " + entry.getKey() + " has " + entry.getValue().getTotalIssues());
}
// or use the sum of the three getNum... methods

// 2) count only closed + resolved
for (Map.Entry<String, IssueStatus> entry : issueStatusMap.entrySet()) {
    IssueStatus is = entry.getValue();
    System.out.println("project name: " + entry.getKey() + " status count: closed + resolved = "
            + (is.getNumOfClosed() + is.getNumOfResolved()));
}
Of course you can do it all with Java 8 streams and groupingBy, but I don't advise it, because you have to perform another pass over the list for each question you ask. That can be expensive if the list of issues is very large.
In this example, if you compute both the total counts and the counts of "closed" and "resolved" issues with Collectors.groupingBy, you go through the issue list twice. My solution needs a single pass, at the cost of some extra heap space for the IssueStatus objects; when gathering the results, another small loop goes over the project status objects instead of over all issues. (If there are 100 projects with 5000 issues, that is a big win.)
Finally, to answer your last point (I admit this one isn't entirely clear to me):

if (PRJ1(2 issues) == PRJ1(2 issues with status))

is simply:
IssueStatus status = issueStatusMap.get("<your projectName>");
if( status.getNum... == status.getNum... ) {
// do something
}
Use the Java 8 Collectors API to perform the grouping; see https://www.mkyong.com/java8/java-8-collectors-groupingby-and-mapping-example/
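For instance, a hedged sketch of that approach (it assumes Issue exposes getProject() and getStatus() accessors returning Strings, which the question does not show):

import java.util.*;
import java.util.stream.Collectors;

// total issues per project
Map<String, Long> totalByProject = issues.stream()
        .collect(Collectors.groupingBy(Issue::getProject, Collectors.counting()));

// issues per project whose status is Closed or Resolved
Set<String> done = new HashSet<>(Arrays.asList("Closed", "Resolved"));
Map<String, Long> doneByProject = issues.stream()
        .filter(i -> done.contains(i.getStatus()))
        .collect(Collectors.groupingBy(Issue::getProject, Collectors.counting()));

// projects where the two counts match
List<String> fullyDone = totalByProject.entrySet().stream()
        .filter(e -> e.getValue().equals(doneByProject.get(e.getKey())))
        .map(Map.Entry::getKey)
        .collect(Collectors.toList());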
A simple approach, assuming that you actually have a Project class: you can use a Map<Project, List<Issue>> and go from there:

Map<Project, List<Issue>> issuesByProject = new HashMap<>();
for (Issue issue : issues) {
    if (/* issue status ... can be ignored */) {
        continue;
    }
    Project proj = issue.getProject();
    if (issuesByProject.containsKey(proj)) {
        issuesByProject.get(proj).add(issue);
    } else {
        List<Issue> newListForProject = new ArrayList<>();
        newListForProject.add(issue);
        issuesByProject.put(proj, newListForProject);
    }
}
This code iterates over your list (using the simpler, preferred for-each loop style). We first check whether the issue needs to be processed at all (for example by checking its status). If not, we skip to the next iteration (using continue). If processing is required, we check whether the map already contains a list for the current project; if so, we simply add the issue to it. If not, we create a new list, add the issue, and then put the list into the map.
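As a side note, Java 8's Map.computeIfAbsent collapses that contains/put dance into a single call; a minimal sketch, with the status check left as a hypothetical shouldIgnore helper:

for (Issue issue : issues) {
    if (shouldIgnore(issue)) { // hypothetical helper for the status check above
        continue;
    }
    issuesByProject.computeIfAbsent(issue.getProject(), p -> new ArrayList<>())
                   .add(issue);
}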

Update ranking in parent-child based data

I have a table where user_id and parent_user_id are stored. For example:

user_id   parent_user_id   calls   designation
-------   --------------   -----   -----------------
1         0                10      Tech Support
2         1                5       Sr. Tech Support
3         2                11      Tech Support
4         2                12      Tech Support
5         4                10      Tech Support
The scenario is: if a user has 2 children with 10 calls each, he gets a designation change, e.g. to Sr. Tech Support. And if he has 10 such callers, it becomes Manager.
Here is what I have done so far (Java):
@Override
public boolean updateDesignation(int userId, int depth) {
    // check whether the maximum depth is reached
    if (depth == 0)
        return false;
    depth--;
    int userIds = getIds(userId); // will get parent_id
    String LOCAL_SQL = SQLconstants.getSQL("get-total-calls.sql");
    if (userIds > 0) {
        int calls = jdbcTemplate.queryForObject(LOCAL_SQL, Integer.class, userIds);
        // I get 4's calls, with which I need to see if I have 2 users with 10 calls each!
        updateDesignation(userIds, depth);
    }
    //updateRanks(userId, depth);
    return true;
}
If I pass 5 as user_id and 4 as depth, it walks up the hierarchy and updates values, going 5->4, 4->2, 2->1. But what I need is: go 5->4, then check the calls of 4's children, and do the same for 3, 2 and 1. How can I do this? Please help me.
if (userIds >= 0) { // process 0 too, as it is a parent
    /* execute this SQL:
       SELECT COUNT(*) FROM tablename WHERE parent_user_id = userIds AND calls >= 10;
       then check whether the returned value is >= 2 (or whatever other checks you need)
       and update the designations...
    */
    updateDesignation(userIds, depth);
}
This way you don't need to fetch the calls of each parent, so this line is not needed anymore:
int calls = jdbcTemplate.queryForObject(LOCAL_SQL, Integer.class, userIds);
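Putting it together, a hedged sketch of the revised method: the COUNT query comes from this answer, while the table name (users here, "tablename" above), the UPDATE statement, and the Manager threshold are assumptions based on the question.

@Override
public boolean updateDesignation(int userId, int depth) {
    if (depth == 0) // maximum depth reached
        return false;
    depth--;

    int parentId = getIds(userId); // resolve parent_user_id, as in the question
    if (parentId >= 0) {           // process 0 too, as it is a parent
        // how many of this parent's children have reached 10 calls?
        Integer qualified = jdbcTemplate.queryForObject(
                "SELECT COUNT(*) FROM users WHERE parent_user_id = ? AND calls >= 10",
                Integer.class, parentId);
        if (qualified != null && qualified >= 2) {
            jdbcTemplate.update(
                    "UPDATE users SET designation = ? WHERE user_id = ?",
                    qualified >= 10 ? "Manager" : "Sr. Tech Support", parentId);
        }
        updateDesignation(parentId, depth); // continue up the tree
    }
    return true;
}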

Diverging results from Weka training and Java training

I'm trying to create an "automated training" using Weka's Java API, but I guess I'm doing something wrong. Whenever I test my ARFF file via Weka's interface using MultilayerPerceptron with 10-fold cross-validation or a 66% percentage split, I get satisfactory results (around 90%); but when I test the same file via Weka's API, every test returns basically a 0% match (every row returns false).
Here's the output from Weka's GUI:
=== Evaluation on test split ===
=== Summary ===

Correctly Classified Instances          78               91.7647 %
Incorrectly Classified Instances         7                8.2353 %
Kappa statistic                          0.8081
Mean absolute error                      0.0817
Root mean squared error                  0.24
Relative absolute error                 17.742  %
Root relative squared error             51.0603 %
Total Number of Instances               85

=== Detailed Accuracy By Class ===

               TP Rate   FP Rate   Precision   Recall   F-Measure   ROC Area   Class
               0.885     0.068     0.852       0.885    0.868       0.958      1
               0.932     0.115     0.948       0.932    0.94        0.958      0
Weighted Avg.  0.918     0.101     0.919       0.918    0.918       0.958

=== Confusion Matrix ===

  a  b   <-- classified as
 23  3 |  a = 1
  4 55 |  b = 0
And here's the code I've been using in Java (it's actually .NET using IKVM):
var cl = new weka.classifiers.functions.MultilayerPerceptron();
cl.setOptions(weka.core.Utils.splitOptions("-L 0.7 -M 0.3 -N 75 -V 0 -S 0 -E 20 -H a")); // these are the same (default) options used when the test is run in the Weka GUI
string trainingFile = Properties.Settings.Default.WekaTrainingFile; // the path to the same file I use to test in the Weka Explorer
weka.core.Instances data = new weka.core.Instances(new java.io.BufferedReader(new java.io.FileReader(trainingFile))); // loads the file
data.setClassIndex(data.numAttributes() - 1); // set the last column as the class attribute
cl.buildClassifier(data);
var tmp = System.IO.Path.GetTempFileName(); //creates a temp file to create an arff file with a single row with the instance I want to test taken from the arff file loaded previously
using (var f = System.IO.File.CreateText(tmp))
{
//long code to read data from db and regenerate the line, simulating data coming from the source I really want to test
}
var dataToTest = new weka.core.Instances(new java.io.BufferedReader(new java.io.FileReader(tmp)));
dataToTest.setClassIndex(dataToTest.numAttributes() - 1);
double prediction = 0;
for (int i = 0; i < dataToTest.numInstances(); i++)
{
weka.core.Instance curr = dataToTest.instance(i);
weka.core.Instance inst = new weka.core.Instance(data.numAttributes());
inst.setDataset(data);
for (int n = 0; n < data.numAttributes(); n++)
{
weka.core.Attribute att = dataToTest.attribute(data.attribute(n).name());
if (att != null)
{
if (att.isNominal())
{
if ((data.attribute(n).numValues() > 0) && (att.numValues() > 0))
{
String label = curr.stringValue(att);
int index = data.attribute(n).indexOfValue(label);
if (index != -1)
inst.setValue(n, index);
}
}
else if (att.isNumeric())
{
inst.setValue(n, curr.value(att));
}
else
{
throw new InvalidOperationException("Unhandled attribute type!");
}
}
}
prediction += cl.classifyInstance(inst);
}
//prediction is always 0 here, my ARFF file has two classes: 0 and 1, 92 zeroes and 159 ones
It's funny, because if I change the classifier to, say, NaiveBayes, the results match the test made via Weka's GUI.
You are using a deprecated way of reading in ARFF files. See this documentation. Try this instead:
import weka.core.converters.ConverterUtils.DataSource;
...
DataSource source = new DataSource("/some/where/data.arff");
Instances data = source.getDataSet();
Note that that documentation also shows how to connect to a database directly, and bypass the creation of temporary ARFF files. You could, additionally, read from the database and manually create instances to populate the Instances object with.
Finally, if simply changing the classifier type at the top of the code to NaiveBayes solved the problem, then check the options in your weka gui for MultilayerPerceptron, to see if they are different from the defaults (different settings can cause the same classifier type to produce different results).
Update: it looks like you're using different test data in your code than in your weka GUI (from a database vs a fold of the original training file); it might also be the case that the particular data in your database actually does look like class 0 to the MLP classifier. To verify whether this is the case, you can use the weka interface to split your training arff into train/test sets, and then repeat the original experiment in your code. If the results are the same as the gui, there's a problem with your data. If the results are different, then we need to look more closely at the code. The function you would call is this (from the Doc):
public Instances trainCV(int numFolds, int numFold)
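For instance, a minimal sketch of that verification (it assumes the last attribute is the class and uses the Weka API shown above; the fold count and random seed are arbitrary choices):

import weka.classifiers.Evaluation;
import weka.classifiers.functions.MultilayerPerceptron;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
...
Instances data = new DataSource("/some/where/data.arff").getDataSet();
data.setClassIndex(data.numAttributes() - 1);
data.randomize(new java.util.Random(1)); // shuffle with a fixed seed, like the GUI's split
data.stratify(3);                        // keep the class distribution in each fold
Instances train = data.trainCV(3, 0);    // roughly a 66% / 33% split
Instances test  = data.testCV(3, 0);

MultilayerPerceptron mlp = new MultilayerPerceptron();
mlp.buildClassifier(train);
Evaluation eval = new Evaluation(train);
eval.evaluateModel(mlp, test);
System.out.println(eval.toSummaryString());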
I had the same problem.
Weka gave me different results in the Explorer compared to a cross-validation in Java.
Something that helped:
Instances dataSet = ...;
dataSet.stratify(numOfFolds); // use this
//before splitting the dataset into train and test set!
