I am trying to implement lazy loading with Grid in Vaadin 8, but it only shows an empty table. What am I doing wrong? Also, how do I set the number of items to load (the limit) to, for example, 20 items (the default is 40 items)?
private Grid<Image> makeLazyTable()
{
    Grid<Image> grid = new Grid<Image>();
    DataProvider<Image, Void> dataProvider = DataProvider.fromCallbacks(
            query -> {
                int offset = query.getOffset();
                int limit = query.getLimit();
                OffsetRequest request = new OffsetRequest();
                request.setLimit(limit);
                request.setOffset(offset);
                List<QuerySortOrder> sort = query.getSortOrders();
                return ImagesRepository.findAll(request, sort);
            },
            query -> ImagesRepository.getImageCount());
    grid.setDataProvider(dataProvider);
    return grid;
}
I did not add any columns; that was the reason the table was empty. It works this way:
Column<Image, String> filenameColumn = grid.addColumn(Image::getFilename);
Column<Image, String> orientationColumn = grid.addColumn(Image::getOrientation);
Column<Image, String> latitudeColumn = grid.addColumn(Image::getLatitude);
Column<Image, String> longitudeColumn = grid.addColumn(Image::getLongitude);
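For completeness, here is the whole method as a minimal sketch (assuming ImagesRepository.findAll returns a List<Image>, hence the .stream() call, since the fetch callback must return a Stream):

private Grid<Image> makeLazyTable()
{
    Grid<Image> grid = new Grid<>();

    // Without columns the Grid renders as an empty table.
    grid.addColumn(Image::getFilename).setCaption("Filename");
    grid.addColumn(Image::getOrientation).setCaption("Orientation");

    DataProvider<Image, Void> dataProvider = DataProvider.fromCallbacks(
            query -> {
                OffsetRequest request = new OffsetRequest();
                request.setOffset(query.getOffset());
                request.setLimit(query.getLimit());
                // Assumption: findAll returns a List<Image>.
                return ImagesRepository.findAll(request, query.getSortOrders()).stream();
            },
            query -> ImagesRepository.getImageCount());

    grid.setDataProvider(dataProvider);
    return grid;
}

As for limiting the fetch to 20 items: as far as I can tell, the limit passed to the callback is driven by the client-side row cache in Vaadin 8, so it is not something this method can simply pin to 20.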
I have a MultiSelectComboBox, but when I try to select some items through its select() method, nothing happens.
My code looks as follows:
Binder<Technology> binder = new BeanValidationBinder<>(Technology.class);
MultiSelectComboBox<TechnologyLabel> multiComboBox = new MultiSelectComboBox<>("Labels");
binder
    .forField(this.multiComboBox)
    .bind(Technology::getLabels, Technology::setLabels);

List<TechnologyLabel> labelList = technologyLayout.technologyLabelService.getTechnologyLabels(technology);
List<Label> containedLabels = new ArrayList<>();
for (var tl : labelList) {
    containedLabels.add(tl.getLabel());
}

List<Label> labels = labelService.findAllLabels();
for (var label : labels) {
    if (!containedLabels.contains(label)) {
        labelList.add(new TechnologyLabel(technology, label));
    }
}

multiComboBox.setItems(labelList);
multiComboBox.setItemLabelGenerator(TechnologyLabel::getLabelName);
note.setMaxHeight("10em");
multiComboBox.select(labelList.get(0));
What I do above is query my many-to-many relation to find all labels for the given technology. Then I find all the labels and create a list from that. Lastly, I try to select the first item of the list, but it does not show a tick next to it.
I'm using Spark SQL 1.6.2 (Java API) and I have to process the following DataFrame, which has a list of values in two columns:
ID AttributeName AttributeValue
0 [an1,an2,an3] [av1,av2,av3]
1 [bn1,bn2] [bv1,bv2]
The desired table is:
ID AttributeName AttributeValue
0 an1 av1
0 an2 av2
0 an3 av3
1 bn1 bv1
1 bn2 bv2
I think I have to use a combination of the explode function and a custom UDF function.
I found the following resources:
Explode (transpose?) multiple columns in Spark SQL table
How do I call a UDF on a Spark DataFrame using JAVA?
and I can successfully run an example that reads the two columns and returns the concatenation of the first two strings in a column:
UDF2 combineUDF = new UDF2<Seq<String>, Seq<String>, String>() {
    public String call(final Seq<String> col1, final Seq<String> col2) throws Exception {
        return col1.apply(0) + col2.apply(0);
    }
};
context.udf().register("combineUDF", combineUDF, DataTypes.StringType);
The problem is writing the signature of a UDF that returns two columns (in Java).
As far as I understand, I must define a new StructType like the one shown below and set it as the return type, but so far I haven't managed to get the final code working.
StructType retSchema = new StructType(new StructField[]{
        new StructField("#AttName", DataTypes.StringType, true, Metadata.empty()),
        new StructField("#AttValue", DataTypes.StringType, true, Metadata.empty())
});
context.udf().register("combineUDF", combineUDF, retSchema);
Any help will be really appreciated.
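For what it's worth, a two-column (struct) UDF in the Java API would look roughly like the sketch below, returning a Row that matches retSchema. I have not verified this against 1.6.2, so treat it as an assumption:

UDF2<Seq<String>, Seq<String>, Row> combineUDF = new UDF2<Seq<String>, Seq<String>, Row>() {
    public Row call(final Seq<String> col1, final Seq<String> col2) throws Exception {
        // One struct value holding the first element of each list.
        return RowFactory.create(col1.apply(0), col2.apply(0));
    }
};
context.udf().register("combineUDF", combineUDF, retSchema);

The resulting struct column could then, in principle, be expanded into two columns with something like selectExpr("combined.*").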
UPDATE: I'm trying first to implement zip(AttributeName, AttributeValue), so then I will just need to apply the standard explode function in Spark SQL:
ID AttName_AttValue
0 [[an1,av1],[an2,av2],[an3,av3]]
1 [[bn1,bv1],[bn2,bv2]]
I built the following UDF:
UDF2 combineColumns = new UDF2<Seq<String>, Seq<String>, List<List<String>>>() {
    public List<List<String>> call(final Seq<String> col1, final Seq<String> col2) throws Exception {
        List<List<String>> zipped = new LinkedList<>();
        for (int i = 0, listSize = col1.size(); i < listSize; i++) {
            List<String> subRow = Arrays.asList(col1.apply(i), col2.apply(i));
            zipped.add(subRow);
        }
        return zipped;
    }
};
But when I run the code
myDF.select(callUDF("combineColumns", col("AttributeName"), col("AttributeValue"))).show(10);
I got the following error message:
scala.MatchError: [[an1,av1],[an2,av2],[an3,av3]] (of class java.util.LinkedList)
It looks like the combining has been performed correctly, but the return type is not the one expected on the Scala side.
Any help?
Finally I managed to get the result I was looking for, but probably not in the most efficient way.
Basically there are 2 steps:
Zip of the two lists
Explode of the list into rows
For the first step I defined the following UDF function:
UDF2 concatItems = new UDF2<Seq<String>, Seq<String>, Seq<String>>() {
    public Seq<String> call(final Seq<String> col1, final Seq<String> col2) throws Exception {
        ArrayList<String> zipped = new ArrayList<>();
        for (int i = 0, listSize = col1.size(); i < listSize; i++) {
            String subRow = col1.apply(i) + ";" + col2.apply(i);
            zipped.add(subRow);
        }
        return scala.collection.JavaConversions.asScalaBuffer(zipped);
    }
};
Don't forget the function registration (note the array return type, since the UDF returns a Seq<String>):
sparkSession.udf().register("concatItems", concatItems, DataTypes.createArrayType(DataTypes.StringType));
and then I called it with the following code:
DataFrame df2 = df.select(col("ID"), callUDF("concatItems", col("AttributeName"), col("AttributeValue")).alias("AttName_AttValue"));
At this stage df2 looks like this:
ID AttName_AttValue
0 [an1;av1, an2;av2, an3;av3]
1 [bn1;bv1, bn2;bv2]
Then I called the explode function to turn the list into rows:
DataFrame df3 = df2.select(col("ID"),explode(col("AttName_AttValue")).alias("AttName_AttValue_row"));
At this stage df3 looks like this:
ID AttName_AttValue_row
0 an1;av1
0 an2;av2
0 an3;av3
1 bn1;bv1
1 bn2;bv2
Finally, to split the attribute name and value into two different columns, I converted the DataFrame into a JavaRDD in order to use the map function:
JavaRDD<Row> df3RDD = df3.toJavaRDD().map(
        (Function<Row, Row>) myRow -> {
            // Split on the ";" separator used by concatItems above.
            String[] info = String.valueOf(myRow.get(1)).split(";");
            return RowFactory.create(myRow.get(0), info[0], info[1]);
        }).cache();
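If a DataFrame is needed again afterwards, the RDD of Rows can be combined with a schema (a sketch, assuming the SQLContext is named context and that the ID values are strings; the first field's type must match what the map above actually emits):

StructType finalSchema = new StructType(new StructField[]{
        new StructField("ID", DataTypes.StringType, true, Metadata.empty()),
        new StructField("AttributeName", DataTypes.StringType, true, Metadata.empty()),
        new StructField("AttributeValue", DataTypes.StringType, true, Metadata.empty())
});
DataFrame df4 = context.createDataFrame(df3RDD, finalSchema);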
If anybody has a better solution, feel free to comment.
I hope it helps.
I would like to clarify: may I receive items using only the @DynamoDBIndexHashKey, without using the @DynamoDBHashKey?
I have a table with the fields
@DynamoDBIndexHashKey(attributeName = "count", globalSecondaryIndexName = "count-index")
@DynamoDBHashKey(attributeName = "cluster_output_Id")
@DynamoDBRangeKey(attributeName = "last_fetch")
I have no @DynamoDBIndexRangeKey.
Here's the code:
MyEntity myEntity = new MyEntity();
myEntity.setCount(1); // Integer
DynamoDBQueryExpression<MyEntity> queryExpression = new DynamoDBQueryExpression<MyEntity>()
        .withHashKeyValues(myEntity)
        .withIndexName("count-index");
queryExpression.setConsistentRead(false);
List<MyEntity> myCollection = mapper.query(MyEntity.class, queryExpression);
Error:
AmazonServiceException: Status Code: 400, AWS Service: AmazonDynamoDBv2, AWS Request ID: I97S04LDGO6FSF56OCJ8S3K167VV4KQNSO5AEMVJF66Q9ASUAAJG, AWS Error Code: ValidationException, AWS Error Message: One or more parameter values were invalid: Invalid number of argument(s) for the EQ ComparisonOperator
How can I get items by the @DynamoDBIndexHashKey?
P.S. Scan works, but it's not interesting to me, because I will want sorting later.
Querying with @DynamoDBHashKey works; my problem is with @DynamoDBIndexHashKey.
Here is the answer to my question, using the same example.
Entity:
@DynamoDBHashKey(attributeName = "cluster_output_Id")
public Integer getCluster_output_Id() {
    return cluster_output_Id;
}

@DynamoDBIndexHashKey(attributeName = "count", globalSecondaryIndexName = "count-index")
public Integer getCount() {
    return count;
}

@DynamoDBRangeKey(attributeName = "last_fetch")
@DynamoDBIndexRangeKey(attributeName = "last_fetch", globalSecondaryIndexName = "count-index")
public Date getLast_fetch() {
    return last_fetch;
}
Code:
dynamoDBMapper = new DynamoDBMapper(amazonDynamoDBClient);
MyClass myClass = new MyClass();
DynamoDBQueryExpression<MyClass> queryExpression = new DynamoDBQueryExpression<MyClass>();
myClass.setCount(1);
queryExpression.setHashKeyValues(myClass);
queryExpression.withIndexName("count-index"); // not strictly necessary
Condition rangeKeyCondition = new Condition(); // defined here but never attached to the query
rangeKeyCondition.withComparisonOperator(ComparisonOperator.NE)
        .withAttributeValueList(new AttributeValue().withS(""));
queryExpression.setConsistentRead(false);
List<MyClass> entities = dynamoDBMapper.query(MyClass.class, queryExpression);
Thank you!
As explained here:
Table table = dynamoDB.getTable("tableName");
Index index = table.getIndex("count-index");

QuerySpec querySpec = new QuerySpec()
        .withKeyConditionExpression("#cnt = :v_count")
        .withNameMap(new NameMap().with("#cnt", "count")) // "count" is a DynamoDB reserved word
        .withValueMap(new ValueMap().withNumber(":v_count", 1));

ItemCollection<QueryOutcome> items = index.query(querySpec);
Iterator<Item> iterator = items.iterator();
while (iterator.hasNext()) {
    // .......
}
You cannot use Query to find items based on sort/range key only.
You can read more here.
In a Query operation, you use the KeyConditionExpression parameter to determine the items to be read from the table or index. You must specify the partition key name and value as an equality condition. You can optionally provide a second condition for the sort key (if present).
In this case your options are:
Scan operation with last_fetch as a filter (see the sketch after this list).
Redesign your database to have a GSI with last_fetch as the partition key.
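For option 1, a minimal sketch with DynamoDBMapper (the filter value is a placeholder, and last_fetch is compared as a string here only for illustration):

DynamoDBScanExpression scanExpression = new DynamoDBScanExpression()
        .withFilterExpression("last_fetch > :v_fetch")
        .withExpressionAttributeValues(
                Collections.singletonMap(":v_fetch", new AttributeValue().withS("2016-01-01")));
List<MyClass> results = mapper.scan(MyClass.class, scanExpression);

Keep in mind that a Scan reads the whole table and applies the filter afterwards, so it gives you no server-side sorting.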
MongoDB has an update function that can increment pre-existing fields. However, I found that it can only update flat JSON. Whenever there's a JSON object inside a JSON object, with a value I want to increment, I can't actually seem to do it. It returns this error:
com.mongodb.WriteConcernException: Write failed with error code 14 and error message
'Cannot increment with non-numeric argument: {laneQty: { BOTTOM: 1 }}'
As you can see, I tried to increment laneQty.BOTTOM by 1. I don't want to write an algorithm to change every single layered JSON field into dot notation (like laneQty.BOTTOM), so is there a way to turn the JSON into dot notation before the upsert?
For now my general upsert function looks like this:
public boolean incrementJson(BasicDBObject json, String colName, ArrayList<String> queryParams, ArrayList<String> removeParams) {
    /* Make sure the game id AND the main player id can't both be the same.
       If either/or, it's fine. We don't want duplicates. */
    BasicDBObject query = new BasicDBObject();
    DBCollection collection = db.getCollection(colName);
    for (int i = 0; i < queryParams.size(); i++) {
        String param = queryParams.get(i);
        query.put(param, json.get(param));
    }
    for (String param : removeParams) {
        json.remove(param);
    }
    return collection.update(query, new BasicDBObject("$inc", json), true, false).isUpdateOfExisting();
}
Are there any suggested upgrades to this code that would let it easily update layered JSON as well? Thank you!
By the way, it will be very hard for me to hardcode this. There are a ton of layered objects, and that would take me forever. Also, I am not in complete control of which fields are populated in the layers, so I can't just write laneQty.BOTTOM every single time, because it will not always exist. Prior to upserting, the BasicDBObject json was actually a Java bean parsed into a BasicDBObject. This is its constructor, if it's of any help:
public ChampionBean(int rank, int division, int assists, int deaths, int kills, int qty, int championId,
        HashMap<String, Integer> laneQty, HashMap<String, Integer> roleQty,
        ParticipantTimelineDataBean assistedLaneDeathsPerMinDeltas,
        ParticipantTimelineDataBean assistedLaneKillsPerMinDeltas, ParticipantTimelineDataBean creepsPerMinDeltas,
        ParticipantTimelineDataBean csDiffPerMinDeltas, ParticipantTimelineDataBean damageTakenDiffPerMinDeltas,
        ParticipantTimelineDataBean damageTakenPerMinDeltas, ParticipantTimelineDataBean goldPerMinDeltas,
        ParticipantTimelineDataBean xpDiffPerMinDeltas, ParticipantTimelineDataBean xpPerMinDeltas, int wins,
        int weekDate, int yearDate) {
    super();
    this.rank = rank;
    this.division = division;
    this.assists = assists;
    this.deaths = deaths;
    this.kills = kills;
    this.qty = qty;
    this.championId = championId;
    this.laneQty = laneQty;
    this.roleQty = roleQty;
    this.assistedLaneDeathsPerMinDeltas = assistedLaneDeathsPerMinDeltas;
    this.assistedLaneKillsPerMinDeltas = assistedLaneKillsPerMinDeltas;
    this.creepsPerMinDeltas = creepsPerMinDeltas;
    this.csDiffPerMinDeltas = csDiffPerMinDeltas;
    this.damageTakenDiffPerMinDeltas = damageTakenDiffPerMinDeltas;
    this.damageTakenPerMinDeltas = damageTakenPerMinDeltas;
    this.goldPerMinDeltas = goldPerMinDeltas;
    this.xpDiffPerMinDeltas = xpDiffPerMinDeltas;
    this.xpPerMinDeltas = xpPerMinDeltas;
    this.wins = wins;
    this.weekDate = weekDate;
    this.yearDate = yearDate;
}
ParticipantTimelineDataBean is another bean with 4 int fields inside it. I want to increment those fields (so yes, it's only 2 layers deep; a solution that works two layers deep is fine for me too).
Use the dot-notation:
new BasicDBObject("$inc", new BasicDBObject("laneQty.BOTTOM", 1) )
Alternative quick&dirty solution: Just collection.save the whole document under the same _id.
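To avoid hand-writing every dotted path, the nested document can be flattened before the $inc. A minimal sketch (flatten and flattenInto are hypothetical helpers, not driver API), which would let the incrementJson method above pass new BasicDBObject("$inc", flatten(json)) to update:

// Recursively flatten a nested BasicDBObject into dot notation, so that
// {laneQty: {BOTTOM: 1}} becomes {"laneQty.BOTTOM": 1} before the $inc.
private static BasicDBObject flatten(BasicDBObject source) {
    BasicDBObject flat = new BasicDBObject();
    flattenInto(flat, source, "");
    return flat;
}

private static void flattenInto(BasicDBObject target, BasicDBObject source, String prefix) {
    for (String key : source.keySet()) {
        Object value = source.get(key);
        String path = prefix.isEmpty() ? key : prefix + "." + key;
        if (value instanceof BasicDBObject) {
            flattenInto(target, (BasicDBObject) value, path); // recurse into nested objects
        } else {
            target.put(path, value); // leaf value stays incrementable if numeric
        }
    }
}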
Use this library:
https://github.com/rhalff/dot-object
For example, if you have an object like this:
var jsonObject = {
    info: {
        firstName: 'aamir',
        lastName: 'ryu',
        email: 'aamiryu@gmail.com'
    }
};
then your node.js code would be like this:
var dot = require('dot-object');
var jsonObject = { /* as above ;-) */ };
var convertJsonObjectToDot = dot.dot(jsonObject);
console.log(convertJsonObjectToDot);
The output will be as shown below:
{
    'info.firstName': 'aamir',
    'info.lastName': 'ryu',
    'info.email': 'aamiryu@gmail.com'
}
Please bear with me, this is my first answer ever on Stack Overflow. I was searching for the same thing and found this solution; I hope it helps you out.
I'm running a sample Java program to scan a DynamoDB table. The table has about 90,000 items, but when I get the scanned count from Java it shows only 1994 items:
ScanRequest scanRequest = new ScanRequest().withTableName(tableName);
ScanResult result = client.scan(scanRequest);
System.out.println("#items:" + result.getScannedCount());
The program outputs #items:1994, but the detail from the Amazon AWS console shows:
Item Count*: 89249
Any idea? Thanks!
Scanning or querying DynamoDB returns at most 1 MB of data per request.
The count is the number of returned items that fit in that 1 MB. To count the whole table, you have to keep scanning until the returned LastEvaluatedKey is null.
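A sketch of that loop when only the count is needed (the table name is a placeholder; Select.COUNT avoids transferring the items themselves):

int total = 0;
Map<String, AttributeValue> lastKey = null;
do {
    ScanRequest request = new ScanRequest()
            .withTableName("table_name")
            .withSelect(Select.COUNT)          // return counts only, not item data
            .withExclusiveStartKey(lastKey);   // null on the first request
    ScanResult page = client.scan(request);
    total += page.getCount();                  // items counted in this 1 MB page
    lastKey = page.getLastEvaluatedKey();
} while (lastKey != null);
System.out.println("#items: " + total);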
Set your book object with the correct hash key value and use DynamoDBMapper to get the count; the mapper handles the pagination for you:
DynamoDBQueryExpression<Book> queryExpression = new DynamoDBQueryExpression<Book>()
        .withHashKeyValues(book);
dynamoDbMapper.count(Book.class, queryExpression);
This should help; it worked for me:
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
        .withRegion("your region").build();
DynamoDB dynamoDB = new DynamoDB(client);
TableDescription tableDescription = dynamoDB.getTable("table name").describe();
tableDescription.getItemCount();
Note that DynamoDB updates this item count only roughly every six hours, so it can lag behind the live table.
Based on the answer from nightograph:
private ArrayList<String> fetchItems() {
    ArrayList<String> ids = new ArrayList<>();
    ScanResult result = null;
    do {
        ScanRequest req = new ScanRequest();
        req.setTableName("table_name");
        if (result != null) {
            // Continue the scan where the previous 1 MB page left off.
            req.setExclusiveStartKey(result.getLastEvaluatedKey());
        }
        result = amazonDynamoDBClient.scan(req);
        List<Map<String, AttributeValue>> rows = result.getItems();
        for (Map<String, AttributeValue> map : rows) {
            AttributeValue v = map.get("rangeKey");
            String id = v.getS();
            ids.add(id);
        }
    } while (result.getLastEvaluatedKey() != null);
    System.out.println("Result size: " + ids.size());
    return ids;
}
I agree with nightograph. I think this link is useful:
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/QueryAndScan.html
I just tested with this example. Anyway, this is the DynamoDB v2 API:
final ScanRequest scanRequest = new ScanRequest()
        .withTableName("table_name");
final ScanResult result = dynamoDB.scan(scanRequest);
// Note: getCount() here covers only the first page (up to 1 MB) of results.
return result.getCount();