Is it possible to have a RowExpander that renders not HTML but another row? That is, a row has an expand [+] icon, and when expanded, sub-rows appear like "child rows"?
For example I have a List<ModelData> like this:
ModelData model1 = new BaseModelData();
model1.set("Date", "11-11-11");
model1.set("Time", "11:11:11");
model1.set("Code", "abcdef");
model1.set("Status", "OK");
ModelData model2 = new BaseModelData();
model2.set("Date", "11-11-11");
model2.set("Time", "12:11:11");
model2.set("Code", "abcdef");
model2.set("Status", "Failed");
ModelData model3 = new BaseModelData();
model3.set("Date", "11-11-11");
model3.set("Time", "13:11:11");
model3.set("Code", "abcdef");
model3.set("Status", "Failed");
ModelData model4 = new BaseModelData();
model4.set("Date", "11-11-11");
model4.set("Time", "14:11:11");
model4.set("Code", "abcdef");
model4.set("Status", "Failed");
List<ModelData> data = new ArrayList<ModelData>();
data.add(model1);
data.add(model2);
data.add(model3);
data.add(model4);
This should be rendered in the Grid as follows (grouped by the Code and Status columns):
Date | Time | Code | Status
-------------------------------------
11-11-11 | 11:11:11 | abcdef | OK
[+] 11-11-11 | 12:11:11 | abcdef | Failed
|---> 11-11-11 | 13:11:11 | abcdef | Failed
|---> 11-11-11 | 14:11:11 | abcdef | Failed
Something like this.
Update:
I was advised that the solution would be to extend the RowExpander class and merge it with the GridView class.
You can take a look at GroupingView and TreeGrid and customize one of them for your purposes. It is much safer than trying to reuse GridView's row-rendering functionality.
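For illustration, here is a rough sketch of the TreeGrid route in GXT 2.x. This is only a sketch: exact class names and signatures may differ between GXT versions, and model1..model4 are the models from the question.

TreeStore<ModelData> store = new TreeStore<ModelData>();
store.add(model1, false);           // the OK row stays a top-level leaf
store.add(model2, false);           // the first Failed row becomes the parent
store.add(model2, model3, false);   // children appear when [+] is expanded
store.add(model2, model4, false);

List<ColumnConfig> configs = new ArrayList<ColumnConfig>();
ColumnConfig date = new ColumnConfig("Date", "Date", 100);
date.setRenderer(new TreeGridCellRenderer<ModelData>()); // renders the expand icon
configs.add(date);
configs.add(new ColumnConfig("Time", "Time", 100));
configs.add(new ColumnConfig("Code", "Code", 100));
configs.add(new ColumnConfig("Status", "Status", 100));

TreeGrid<ModelData> grid = new TreeGrid<ModelData>(store, new ColumnModel(configs));

Deciding which rows become children of which (the grouping by Code and Status) has to be done in your own code before filling the TreeStore.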
I had an unexpected result when trying to store OffsetTime entity properties into a PostgreSQL time with time zone column with Hibernate (5.6.1).
For example (if the current default zone offset is +02):
| OffsetTime | stored timetz |
| ---------- | ------------- |
| 00:00+01   | 00:00+02      |
| 00:00+02   | 00:00+02      |
| 00:00+03   | 00:00+02      |
The original offset was lost and the default one was stored instead.
I looked into two classes:
org.hibernate.type.descriptor.sql.TimeTypeDescriptor:
final Time time = javaTypeDescriptor.unwrap( value, Time.class, options );
org.hibernate.type.descriptor.java.OffsetTimeJavaDescriptor:
if ( java.sql.Time.class.isAssignableFrom( type ) ) {
    return (X) java.sql.Time.valueOf( offsetTime.toLocalTime() );
}
I think I am misunderstanding this logic somewhere (it looks like offsetTime.toLocalTime() is where the offset gets dropped), but in other answers I saw this recommendation: (LINK)
ZoneOffset zoneOffset = ZoneOffset.systemDefault().getRules()
        .getOffset(LocalDateTime.now());
Notification notification = new Notification(
        //...
).setClockAlarm(
        OffsetTime.of(7, 30, 0, 0, zoneOffset)
);
So, must I convert all OffsetTime values to the default time zone so that they are stored correctly?
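If that normalization is indeed the way to go, a JPA AttributeConverter keeps it in one place instead of at every call site. A minimal sketch, assuming you accept that only the instant (not the original offset) is preserved; the converter class itself is mine, not part of Hibernate:

import java.time.LocalDateTime;
import java.time.OffsetTime;
import java.time.ZoneOffset;
import javax.persistence.AttributeConverter;
import javax.persistence.Converter;

// Sketch: shift every OffsetTime to the JVM default offset before persisting,
// so the instant survives Hibernate's unwrap to java.sql.Time.
@Converter(autoApply = true)
public class DefaultOffsetTimeConverter implements AttributeConverter<OffsetTime, OffsetTime> {

    private static ZoneOffset defaultOffset() {
        return ZoneOffset.systemDefault().getRules().getOffset(LocalDateTime.now());
    }

    @Override
    public OffsetTime convertToDatabaseColumn(OffsetTime attribute) {
        return attribute == null ? null : attribute.withOffsetSameInstant(defaultOffset());
    }

    @Override
    public OffsetTime convertToEntityAttribute(OffsetTime dbData) {
        return dbData;
    }
}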
I created a dataset in Spark using Java by reading a CSV file. The following is my initial dataset:
+---+----------+-----+---+
|_c0| _c1| _c2|_c3|
+---+----------+-----+---+
| 1|9090999999|NANDU| 22|
| 2|9999999999| SANU| 21|
| 3|9999909090| MANU| 22|
| 4|9090909090|VEENA| 23|
+---+----------+-----+---+
I want to create a dataframe as follows (one column having null values):
+---+----+-----+
|_c0| _c1|  _c2|
+---+----+-----+
|  1|null|NANDU|
|  2|null| SANU|
|  3|null| MANU|
|  4|null|VEENA|
+---+----+-----+
The following is my existing code:
Dataset<Row> ds = spark.read().format("csv").option("header", "false").load("/home/nandu/Data.txt");
Column[] selectedColumns = new Column[2];
selectedColumns[0] = new Column("_c0");
selectedColumns[1] = new Column("_c2");
Dataset<Row> ds2 = ds.select(selectedColumns);
which will create a dataset as follows:
+---+-----+
|_c0| _c2|
+---+-----+
| 1|NANDU|
| 2| SANU|
| 3| MANU|
| 4|VEENA|
+---+-----+
To select the two columns you want and add a new one filled with nulls, you can use the following:
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.lit;
import org.apache.spark.sql.types.DataTypes;

ds.select(col("_c0"), lit(null).cast(DataTypes.StringType).as("_c1"), col("_c2"));
Try the following code:
import org.apache.spark.sql.functions.{lit => flit}
import org.apache.spark.sql.types._

val ds = spark.range(100).withColumn("c2", $"id")
ds.withColumn("new_col", flit(null: String)).selectExpr("id", "new_col", "c2").show(5)
Hope this helps.
Cheers :)
Adding a new column with a string-typed null value may solve the problem. Try the following code; although it's written in Scala, you'll get the idea:
import org.apache.spark.sql.functions.lit
import org.apache.spark.sql.types.StringType
val ds2 = ds.withColumn("new_col", lit(null).cast(StringType)).selectExpr("_c0", "new_col as _c1", "_c2")
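For completeness, a Java translation of the same idea might look like this (a sketch; ds is the Dataset<Row> from the question):

import static org.apache.spark.sql.functions.lit;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.types.DataTypes;

// Add a string-typed null column, then reorder and rename with selectExpr.
Dataset<Row> ds2 = ds
        .withColumn("new_col", lit(null).cast(DataTypes.StringType))
        .selectExpr("_c0", "new_col as _c1", "_c2");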
I have a table as shown (sample data as used in the answers below):
+---+-----+-----+-----+-----+-----+
|ID |Movie|Cast1|Cast2|Cast3|Cast4|
+---+-----+-----+-----+-----+-----+
|101|ABC  |A    |B    |C    |D    |
|102|XZY  |G    |J    |null |null |
+---+-----+-----+-----+-----+-----+
I want to transform it into the following shape (one cast member per row) using Spark Java or Spark Scala:
+---+-----+----+
|ID |Movie|Cast|
+---+-----+----+
|101|ABC  |A   |
|101|ABC  |B   |
|101|ABC  |C   |
|101|ABC  |D   |
|102|XZY  |G   |
|102|XZY  |J   |
+---+-----+----+
Make sure you have unique column names; then you can do:
import org.apache.spark.sql.functions._

table
  .select($"ID", $"Movie", explode(array($"Cast1", $"Cast2", $"Cast3", $"Cast4")).as("cast"))
  .where(col("cast").isNotNull)
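Since the question also allows Java, here is a rough Java equivalent of this approach (a sketch; table is assumed to be the input Dataset<Row>):

import static org.apache.spark.sql.functions.array;
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.explode;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

// Explode the four Cast columns into a single "cast" column, then drop the nulls.
Dataset<Row> result = table
        .select(col("ID"), col("Movie"),
                explode(array(col("Cast1"), col("Cast2"), col("Cast3"), col("Cast4"))).as("cast"))
        .where(col("cast").isNotNull());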
With "union":
val table = List(
(101, "ABC", "A", "B", "C", "D"),
(102, "XZY", "G", "J", null, null))
.toDF("ID", "Movie", "Cast1", "Cast2", "Cast3", "Cast4")
val columnsToUnion = List("Cast1", "Cast2", "Cast3", "Cast4")
val result = columnsToUnion.map(name => table.select($"ID", $"Movie", col(name).alias("Cast")).where(col(name).isNotNull))
.reduce(_ union _)
result.show(false)
Output:
+---+-----+----+
|ID |Movie|Cast|
+---+-----+----+
|101|ABC |A |
|102|XZY |G |
|101|ABC |B |
|102|XZY |J |
|101|ABC |C |
|101|ABC |D |
+---+-----+----+
NOTE: a table cannot have several columns with the same name; assuming the column names follow the pattern "Cast[i]":
table
  .groupBy("ID", "Movie")
  .agg(flatten(collect_list(array("Cast1", "Cast2", "Cast3", "Cast4"))).as("cast")) // flatten needs Spark 2.4+
  .withColumn("cast", explode(col("cast")))
  .where(col("cast").isNotNull)
// a side note: you should always avoid duplicate column names in the same DataFrame
Suppose in my LDT (LargeMap) bin I have the following values:
key1, value1
key2, value2
key3, value3
key4, value4
...
key50, value50
Now I get my required data using the following snippet:
Map<Object, Object> myFinalRecord = new HashMap<Object, Object>();
// First call to the client to get the large map associated with the bin
LargeMap largeMap = myDemoClient.getLargeMap(myPolicy, myKey, myLDTBinName, null);
for (String myLDTKey : myRequiredKeysFromLDTBin) {
    try {
        // Each get() call results in one round trip to Aerospike
        myFinalRecord.putAll(largeMap.get(Value.get(myLDTKey)));
    } catch (Exception e) {
        log.warn("Key does not exist in LDT bin");
    }
}
The problem is that if myRequiredKeysFromLDTBin contains, say, 20 keys, then largeMap.get(Value.get(myLDTKey)) will make 20 calls to Aerospike.
Thus, at a retrieval time of 1 ms per transaction, one logical call to retrieve 20 ids from a record results in 20 calls to Aerospike, pushing my response time to approximately 20 ms!
So is there any way to pass a set of ids to be retrieved from an LDT bin in a single call?
There is no direct API to do a multi-get. A way of doing this would be to call the lmap API directly on the server multiple times through a UDF.
Example 'mymap.lua':
local lmap = require('ldt/lib_lmap');

function getmany(rec, binname, keys)
  local resultmap = map()
  local keycount = #keys
  for i = 1, keycount, 1 do
    local rc = lmap.exists(rec, binname, keys[i])
    if (rc == 1) then
      resultmap[keys[i]] = lmap.get(rec, binname, keys[i]);
    else
      resultmap[keys[i]] = nil;
    end
  end
  return resultmap;
end
Register this Lua file:
aql> register module 'mymap.lua'
OK, 1 module added.
aql> execute lmap.put('bin', 'c', 'd') on test.demo where PK='1'
+-----+
| put |
+-----+
| 0 |
+-----+
1 row in set (0.000 secs)
aql> execute lmap.put('bin', 'b', 'c') on test.demo where PK='1'
+-----+
| put |
+-----+
| 0 |
+-----+
1 row in set (0.001 secs)
aql> execute mymap.getmany('bin', 'JSON["b","a"]') on test.demo where PK='1'
+--------------------------+
| getmany |
+--------------------------+
| {"a":NIL, "b":{"b":"c"}} |
+--------------------------+
1 row in set (0.000 secs)
aql> execute mymap.getmany('bin', 'JSON["b","c"]') on test.demo where PK='1'
+--------------------------------+
| getmany |
+--------------------------------+
| {"b":{"b":"c"}, "c":{"c":"d"}} |
+--------------------------------+
1 row in set (0.000 secs)
The Java code to invoke this would be:
try {
    resultmap = myClient.execute(myPolicy, myKey, "mymap", "getmany",
            Value.get(myLDTBinName), Value.getAsList(myRequiredKeysFromLDTBin));
} catch (Exception e) {
    log.warn("One of the keys does not exist in the LDT bin");
}
The value will be set if the key exists, and NIL is returned if it does not.
I want to create a TreeViewer for an Eclipse plug-in.
I generated the TreeViewer with:
_viewer = new TreeViewer(parent, SWT.MULTI | SWT.H_SCROLL | SWT.V_SCROLL | SWT.FULL_SELECTION);
_viewer.setContentProvider(new ViewContentProvider());
_viewer.getTree().setHeaderVisible(true);
_viewer.getTree().setLinesVisible(true);
_viewer.setAutoExpandLevel(1);

TreeViewerColumn column = new TreeViewerColumn(_viewer, SWT.NONE);
column.getColumn().setText("Package / CCID");
column.getColumn().setWidth(120);
column.setLabelProvider(new ColumnLabelProvider() {
    @Override
    public String getText(Object element) {
        return "test";
    }
});

column = new TreeViewerColumn(_viewer, SWT.NONE);
column.getColumn().setText("Stage");
column.getColumn().setWidth(100);
column.setLabelProvider(new ColumnLabelProvider() {
    @Override
    public String getText(Object element) {
        return "test";
    }
});
Now I want to fill the TreeViewer with data from DB2.
DB2 contains a table named "Package" and a table named "CCID".
At first I want to list all packages. Then I want to expand a package and show all CCIDs for that package.
For example:
+ package 1
+ package 2
+ package 3
and expanded:
- package 1
ccid 1
ccid 2
ccid 3
- package 2
ccid 54
ccid 34
ccid 23
- package 3
ccid 32
ccid 23
ccid 23
Does anyone have an idea how to solve this?
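A minimal sketch of how the ViewContentProvider could expose that two-level structure. The PackageRow and CcidRow model classes (and however they get loaded from DB2) are hypothetical stand-ins:

import java.util.List;
import org.eclipse.jface.viewers.ITreeContentProvider;
import org.eclipse.jface.viewers.Viewer;

// Sketch only: PackageRow and CcidRow are hypothetical model classes holding
// rows loaded from the DB2 "Package" and "CCID" tables; the viewer input is a
// List<PackageRow>.
class ViewContentProvider implements ITreeContentProvider {

    @Override
    public Object[] getElements(Object input) {
        return ((List<?>) input).toArray();                    // roots: the packages
    }

    @Override
    public Object[] getChildren(Object parent) {
        if (parent instanceof PackageRow) {
            return ((PackageRow) parent).getCcids().toArray(); // children: the CCIDs
        }
        return new Object[0];
    }

    @Override
    public Object getParent(Object element) {
        return (element instanceof CcidRow) ? ((CcidRow) element).getPackage() : null;
    }

    @Override
    public boolean hasChildren(Object element) {
        return getChildren(element).length > 0;
    }

    @Override
    public void dispose() { }

    @Override
    public void inputChanged(Viewer viewer, Object oldInput, Object newInput) { }
}

With that in place, _viewer.setInput(listOfPackageRowsLoadedFromDb2) fills the first level, and the label providers can switch on the element type instead of returning "test".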