I want to create a tree view for an Eclipse plug-in.
I generated the TreeViewer with:
_viewer = new TreeViewer(parent,SWT.MULTI | SWT.H_SCROLL | SWT.V_SCROLL | SWT.FULL_SELECTION);
_viewer.setContentProvider(new ViewContentProvider());
_viewer.getTree().setHeaderVisible(true);
_viewer.getTree().setLinesVisible(true);
_viewer.setAutoExpandLevel(1);
TreeViewerColumn column = new TreeViewerColumn(_viewer, SWT.NONE);
column.getColumn().setText("Package / CCID");
column.getColumn().setWidth(120);
column.setLabelProvider(new ColumnLabelProvider() {
    @Override
    public String getText(Object element) {
        return "test";
    }
});
column = new TreeViewerColumn(_viewer, SWT.NONE);
column.getColumn().setText("Stage");
column.getColumn().setWidth(100);
column.setLabelProvider(new ColumnLabelProvider() {
    @Override
    public String getText(Object element) {
        return "test";
    }
});
Now I want to fill the tree view with data from DB2.
DB2 contains a table named "Package" and a table named "CCID".
First I want to list all packages. Then I want to be able to expand each package and show all CCIDs for that package.
For example:
+ package 1
+ package 2
+ package 3
and expanded:
- package 1
ccid 1
ccid 2
ccid 3
- package 2
ccid 54
ccid 34
ccid 23
- package 3
ccid 32
ccid 23
ccid 23
Does anyone have an idea how to solve this?
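One way is an ITreeContentProvider that returns the packages as root elements and their CCIDs as children. Below is a minimal sketch; DbPackage and Ccid are hypothetical model classes (with getCcids()/getParentPackage() accessors) that you would fill from the two DB2 tables, e.g. via JDBC:
import java.util.List;
import org.eclipse.jface.viewers.ITreeContentProvider;
import org.eclipse.jface.viewers.Viewer;

public class ViewContentProvider implements ITreeContentProvider {

    @Override
    public Object[] getElements(Object inputElement) {
        // The viewer input is the list of rows loaded from the Package table.
        return ((List<?>) inputElement).toArray();
    }

    @Override
    public Object[] getChildren(Object parentElement) {
        if (parentElement instanceof DbPackage) {
            // CCIDs belonging to this package, loaded from the CCID table.
            return ((DbPackage) parentElement).getCcids().toArray();
        }
        return new Object[0];
    }

    @Override
    public Object getParent(Object element) {
        return (element instanceof Ccid) ? ((Ccid) element).getParentPackage() : null;
    }

    @Override
    public boolean hasChildren(Object element) {
        return element instanceof DbPackage
                && !((DbPackage) element).getCcids().isEmpty();
    }

    @Override
    public void dispose() {
        // nothing to clean up
    }

    @Override
    public void inputChanged(Viewer viewer, Object oldInput, Object newInput) {
        // stateless: nothing to do when the input changes
    }
}
After loading the packages, hand them to the viewer with _viewer.setInput(...), and let the label providers return the package name or CCID per column instead of "test".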
I created a dataset in Spark using Java by reading a CSV file. The following is my initial dataset:
+---+----------+-----+---+
|_c0| _c1| _c2|_c3|
+---+----------+-----+---+
| 1|9090999999|NANDU| 22|
| 2|9999999999| SANU| 21|
| 3|9999909090| MANU| 22|
| 4|9090909090|VEENA| 23|
+---+----------+-----+---+
I want to create a dataframe as follows (one column having null values):
+---+----+--------+
|_c0| _c1| _c2|
+---+----+--------+
| 1|null| NANDU|
| 2|null| SANU|
| 3|null| MANU|
| 4|null| VEENA|
+---+----+--------+
Following is my existing code:
Dataset<Row> ds = spark.read().format("csv").option("header", "false").load("/home/nandu/Data.txt");
Column[] selectedColumns = new Column[2];
selectedColumns[0] = new Column("_c0");
selectedColumns[1] = new Column("_c2");
Dataset<Row> ds2 = ds.select(selectedColumns);
which will create a dataset as follows:
+---+-----+
|_c0| _c2|
+---+-----+
| 1|NANDU|
| 2| SANU|
| 3| MANU|
| 4|VEENA|
+---+-----+
To select the two columns you want and add a new one filled with nulls, you can use the following:
import static org.apache.spark.sql.functions.*;
import org.apache.spark.sql.types.DataTypes;

ds.select(col("_c0"), lit(null).cast(DataTypes.StringType).as("_c1"), col("_c2"));
Try the following code:
import org.apache.spark.sql.functions.{lit => flit}
import org.apache.spark.sql.types._
import spark.implicits._ // for the $ column syntax

val ds = spark.range(100).withColumn("c2", $"id")
ds.withColumn("new_col", flit(null: String)).selectExpr("id", "new_col", "c2").show(5)
Hope this helps.
Cheers :)
Adding a new column with a null string value may solve the problem. Try the following code; it's written in Scala, but you'll get the idea:
import org.apache.spark.sql.functions.lit
import org.apache.spark.sql.types.StringType
val ds2 = ds.withColumn("new_col", lit(null).cast(StringType)).selectExpr("_c0", "new_col as _c1", "_c2")
I have multiple text files that contain information about the popularity of different programming languages in different countries, based on Google searches. I have one text file for each year from 2004 to 2015. I also have a text file that breaks this down into weeks (called iot.txt), but that file does not include the country.
Example data from 2004.txt:
Region java c++ c# python JavaScript
Argentina 13 14 10 0 17
Australia 22 20 22 64 26
Austria 23 21 19 31 21
Belgium 20 14 17 34 25
Bolivia 25 0 0 0 0
etc
Example data from iot.txt:
Week java c++ c# python JavaScript
2004-01-04 - 2004-01-10 88 23 12 8 34
2004-01-11 - 2004-01-17 88 25 12 8 36
2004-01-18 - 2004-01-24 91 24 12 8 36
2004-01-25 - 2004-01-31 88 26 11 7 36
2004-02-01 - 2004-02-07 93 26 12 7 37
My problem is that I am trying to write code that will output the number of countries that have exhibited 0 interest in Python.
This is my current code for reading the text files. I'm not sure of the best way to count the number of regions that show 0 interest in Python across all the years 2004-2015. At first I thought the best approach would be to create a list from all the text files (not including iot.txt) and then search it for any entries that have 0 interest in Python, but I have no idea how to do that.
Can anyone suggest a way to do this?
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.*;

public class Starter {
    public static void main(String[] args) throws Exception {
        BufferedReader fh = new BufferedReader(new FileReader("iot.txt"));

        // First line contains the language names
        String s = fh.readLine();
        List<String> langs = new ArrayList<>(Arrays.asList(s.split("\t")));
        langs.remove(0); // Throw away the first word - "week"

        Map<String, HashMap<String, Integer>> iot = new TreeMap<>();
        while ((s = fh.readLine()) != null) {
            String[] wrds = s.split("\t");
            HashMap<String, Integer> interest = new HashMap<>();
            for (int i = 0; i < langs.size(); i++)
                interest.put(langs.get(i), Integer.parseInt(wrds[i + 1]));
            iot.put(wrds[0], interest);
        }
        fh.close();

        HashMap<Integer, HashMap<String, HashMap<String, Integer>>> regionsByYear = new HashMap<>();
        for (int i = 2004; i < 2016; i++) {
            BufferedReader fh1 = new BufferedReader(new FileReader(i + ".txt"));
            String s1 = fh1.readLine(); // Throw away the first line
            HashMap<String, HashMap<String, Integer>> year = new HashMap<>();
            while ((s1 = fh1.readLine()) != null) {
                String[] wrds = s1.split("\t");
                HashMap<String, Integer> langMap = new HashMap<>();
                for (int j = 1; j < wrds.length; j++) {
                    langMap.put(langs.get(j - 1), Integer.parseInt(wrds[j]));
                }
                year.put(wrds[0], langMap);
            }
            regionsByYear.put(i, year);
            fh1.close();
        }
    }
}
Create a Map<String, Integer> using a HashMap and, each time you find a new country while scanning the incoming data, add it to the map as country -> 0. Each time you find a usage of python, increment the value.
At the end, loop through the entrySet of the map and, for each entry whose getValue() is zero, output getKey().
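A rough Java sketch of that idea, reusing the regionsByYear map built in the question's code (assuming a region counts as zero-interest only if its python value is 0 in every year it appears):
// Sum the python interest per region across all years; regions whose
// total stays 0 never showed any interest.
Map<String, Integer> pythonByRegion = new HashMap<>();
for (HashMap<String, HashMap<String, Integer>> year : regionsByYear.values()) {
    for (Map.Entry<String, HashMap<String, Integer>> region : year.entrySet()) {
        int python = region.getValue().getOrDefault("python", 0);
        pythonByRegion.merge(region.getKey(), python, Integer::sum);
    }
}
int zeroCount = 0;
for (Map.Entry<String, Integer> e : pythonByRegion.entrySet()) {
    if (e.getValue() == 0) {
        System.out.println(e.getKey()); // region with no python interest
        zeroCount++;
    }
}
System.out.println(zeroCount + " regions showed 0 interest in python");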
Suppose in my LDT (LargeMap) bin I have the following values:
key1, value1
key2, value2
key3, value3
key4, value4
...
key50, value50
Now, I get my required data using the following snippet:
Map<Object, Object> myFinalRecord = new HashMap<>();
// First call to client to get the largeMap associated with the bin
LargeMap largeMap = myDemoClient.getLargeMap(myPolicy, myKey, myLDTBinName, null);
for (String myLDTKey : myRequiredKeysFromLDTBin) {
try {
// Here each get call results in one call to aerospike
myFinalRecord.putAll(largeMap.get(Value.get(myLDTKey)));
} catch (Exception e) {
log.warn("Key does not exist in LDT Bin");
}
}
The problem here is that if myRequiredKeysFromLDTBin contains, say, 20 keys, then largeMap.get(Value.get(myLDTKey)) will make 20 calls to Aerospike.
Thus, at a retrieval time of 1 ms per transaction, one call to retrieve 20 IDs from a record results in 20 calls to Aerospike. This increases my response time to approximately 20 ms!
So is there any way to pass a set of IDs to be retrieved from an LDT bin in a single call?
There is no direct API to do a multi-get. A way of doing this is to call the lmap API directly on the server, multiple times, through a UDF.
Example 'mymap.lua':
local lmap = require('ldt/lib_lmap');

function getmany(rec, binname, keys)
    local resultmap = map()
    local keycount = #keys
    for i = 1, keycount, 1 do
        local rc = lmap.exists(rec, binname, keys[i])
        if (rc == 1) then
            resultmap[keys[i]] = lmap.get(rec, binname, keys[i]);
        else
            resultmap[keys[i]] = nil;
        end
    end
    return resultmap;
end
Register this Lua file:
aql> register module 'mymap.lua'
OK, 1 module added.
aql> execute lmap.put('bin', 'c', 'd') on test.demo where PK='1'
+-----+
| put |
+-----+
| 0 |
+-----+
1 row in set (0.000 secs)
aql> execute lmap.put('bin', 'b', 'c') on test.demo where PK='1'
+-----+
| put |
+-----+
| 0 |
+-----+
1 row in set (0.001 secs)
aql> execute mymap.getmany('bin', 'JSON["b","a"]') on test.demo where PK='1'
+--------------------------+
| getmany |
+--------------------------+
| {"a":NIL, "b":{"b":"c"}} |
+--------------------------+
1 row in set (0.000 secs)
aql> execute mymap.getmany('bin', 'JSON["b","c"]') on test.demo where PK='1'
+--------------------------------+
| getmany |
+--------------------------------+
| {"b":{"b":"c"}, "c":{"c":"d"}} |
+--------------------------------+
1 row in set (0.000 secs)
The Java code to invoke this would be:
try {
    resultmap = (Map<?, ?>) myClient.execute(myPolicy, myKey, "mymap", "getmany",
            Value.get(myLDTBinName), Value.getAsList(myRequiredKeysFromLDTBin));
} catch (Exception e) {
    log.warn("One of the keys does not exist in the LDT bin");
}
The value will be set if the key exists; NIL is returned if it does not.
Is it possible to have a RowExpander that is not HTML but rather another row? That is, a row has an expand [+] icon, and when expanded, sub-rows appear like "child rows"?
For example I have a List<ModelData> like this:
ModelData model1 = new BaseModelData();
model1.set("Date", "11-11-11");
model1.set("Time", "11:11:11");
model1.set("Code", "abcdef");
model1.set("Status", "OK");
ModelData model2 = new BaseModelData();
model2.set("Date", "11-11-11");
model2.set("Time", "12:11:11");
model2.set("Code", "abcdef");
model2.set("Status", "Failed");
ModelData model3 = new BaseModelData();
model3.set("Date", "11-11-11");
model3.set("Time", "13:11:11");
model3.set("Code", "abcedf");
model3.set("Status", "Failed");
ModelData model4 = new BaseModelData();
model4.set("Date", "11-11-11");
model4.set("Time", "14:11:11");
model4.set("Code", "abcdef");
model4.set("Status", "Failed");
List<ModelData> data = ...
data.add(model1);
data.add(model2);
data.add(model3);
data.add(model4);
And this will be rendered in the Grid as follows (grouped by the Code and Status columns):
Date | Time | Code | Status
-------------------------------------
11-11-11 | 11:11:11 | abcedf | OK
[+] 11-11-11 | 12:11:11 | abcedf | Failed
|--->11-11-11 | 13:11:11 | abcedf | Failed
|--->11-11-11 | 14:11:11 | abcedf | Failed
Something like this.
Update:
I was advised that the solution would be to extend the RowExpander class and merge it with the GridView class.
You can take a look at GroupingView and TreeGrid and customize one of them for your purposes. It is much safer than trying to reuse GridView's row rendering functionality.
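For example, a rough TreeGrid sketch against the GXT 2.x API (class and method names taken from GXT 2; verify them against your version), reusing the models from the question:
import java.util.Arrays;
import com.extjs.gxt.ui.client.data.ModelData;
import com.extjs.gxt.ui.client.store.TreeStore;
import com.extjs.gxt.ui.client.widget.grid.ColumnConfig;
import com.extjs.gxt.ui.client.widget.grid.ColumnModel;
import com.extjs.gxt.ui.client.widget.treegrid.TreeGrid;
import com.extjs.gxt.ui.client.widget.treegrid.TreeGridCellRenderer;

TreeStore<ModelData> store = new TreeStore<ModelData>();
store.add(model1, false);          // plain row, no children
store.add(model2, false);          // parent row gets the [+] icon
store.add(model2, model3, false);  // child rows of model2
store.add(model2, model4, false);

ColumnConfig date = new ColumnConfig("Date", "Date", 100);
date.setRenderer(new TreeGridCellRenderer<ModelData>()); // column showing the expand icon
ColumnModel cm = new ColumnModel(Arrays.asList(
        date,
        new ColumnConfig("Time", "Time", 100),
        new ColumnConfig("Code", "Code", 100),
        new ColumnConfig("Status", "Status", 100)));

TreeGrid<ModelData> grid = new TreeGrid<ModelData>(store, cm);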
I have an entity class that has an embedded object within it:
@Entity
public class Flight implements Serializable {
    // ... other attributes
    @Embedded
    @AttributeOverrides({
        @AttributeOverride(name = "value", column = @Column(name = "FLIGHT_TIME")),
        @AttributeOverride(name = "dataState", column = @Column(name = "FLIGHT_TIME_TYPE", length = 20))
    })
    private DateDataStateValue flightDate;
}
The DateDataStateValue is as follows:
@Embeddable
public class DateDataStateValue implements DataStateValue<Date>, Serializable {
    private static final long serialVersionUID = 1L;

    @Column(name = "DATASTATE")
    @Enumerated(value = EnumType.STRING)
    private final DataState dataState;

    @Column(name = "DATAVALUE")
    @Temporal(TemporalType.TIMESTAMP)
    private final Date value;
}
When performing a fetch of Flights from the database, using a CriteriaQuery, and creating an Order object on the time column:
Path<Flight> propertyPath = queryRoot.get("flightDate");
Order order = isAscending() ? criteriaBuilder.asc(propertyPath) : criteriaBuilder.desc(propertyPath);
The ordering is not what I want. For instance, if the flight table has the following values:
Flight 1 | ESTIMATED | 1 Jan 2012
Flight 2 | ESTIMATED | 1 Jan 2011
Flight 3 | ACTUAL | 1 Jan 2010
Flight 4 | ESTIMATED | 1 Jan 2009
The result of an ascending sort will be:
Flight 3 | ACTUAL | 1 Jan 2010
Flight 4 | ESTIMATED | 1 Jan 2009
Flight 2 | ESTIMATED | 1 Jan 2011
Flight 1 | ESTIMATED | 1 Jan 2012
It appears that the default ordering of an @Embedded column is the natural ordering of its elements in the order in which they are declared in the class, i.e. DATASTATE first, then DATAVALUE second.
What I would like is that whenever the sort property is flightDate, the ordering is by date first, then by state, i.e.:
Flight 4 | ESTIMATED | 1 Jan 2009
Flight 3 | ACTUAL | 1 Jan 2010
Flight 2 | ESTIMATED | 1 Jan 2011
Flight 1 | ESTIMATED | 1 Jan 2012
Making DateDataStateValue comparable doesn't affect it, and @OrderColumn/@OrderBy don't seem to be the right tools for the job. Does anyone have any ideas?
Thanks in advance.
I didn't even know you could add an order-by on an embeddable property like this. But I wouldn't rely on it; simply add two orders to your query:
// Navigate into the embeddable: flightDate first, then its attributes
Path<?> statePath = queryRoot.get("flightDate").get("dataState");
Path<?> valuePath = queryRoot.get("flightDate").get("value");
Order[] orders;
if (isAscending()) {
    orders = new Order[] { criteriaBuilder.asc(valuePath), criteriaBuilder.asc(statePath) };
} else {
    orders = new Order[] { criteriaBuilder.desc(valuePath), criteriaBuilder.desc(statePath) };
}
query.orderBy(orders);
something like "flightDate.value ASC, flightDate.dataState ASC" perhaps, since all you defined was "flightDate", which implies natural ordering of that object