I tried to get a random element from the map like this:
IMap<Integer, Integer> workWaitTasks = hazelcastInstance.getMap(WORK_WAIT_TASKS);
Set<Integer> collectionTask = Collections.singleton(workWaitTasks.values().stream()
        .skip(workWaitTasks.isEmpty() ? 0 : new Random().nextInt(workWaitTasks.size()))
        .findFirst()
        .get());
int taskId = collectionTask.iterator().next();
but I think the best way is to use predicates.
I read this https://docs.hazelcast.com/imdg/4.2/query/how-distributed-query-works.html#querying-with-sql-like-predicates
Unfortunately this didn't help me; I couldn't find a way to do this.
In SQL it would be like this:
SELECT column FROM table
ORDER BY RAND()
LIMIT 1
How do I make the correct predicate in Hazelcast? Can you give an example, please?
I think there's no straightforward and efficient way to do this using the public API. One option is using Jet:
IMap<Object, Object> sourceMap = instance.getMap("table");
IList<Object> targetList = instance.getList("result");
// max(1, ...) guards against maps with fewer than 10 entries, where the
// integer division would yield 0 and break the modulo / nextInt below
int samplingFactor = Math.max(1, sourceMap.size() / 10);
Pipeline p = Pipeline.create();
p.readFrom(Sources.map(sourceMap))
 // floorMod keeps the bucket non-negative even for negative hash codes
 .filter(item -> Math.floorMod(item.getKey().hashCode(), samplingFactor)
         == ThreadLocalRandom.current().nextInt(samplingFactor))
 .writeTo(Sinks.list(targetList));
instance.newJob(p).join();
The code above should add approximately 10 elements to the result list, from which it's easy to pick a random entry. It can also end up with an empty list, though; you can experiment with the divisor in samplingFactor (for example, dividing the size by 20 instead of 10 roughly doubles the expected sample size) to get a satisfactory probability of getting some result without having to re-run the job.
The same might also be possible using aggregation (IMap.aggregate) with a custom aggregator; perhaps a colleague can provide an answer for that :-).
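For what it's worth, here is a minimal sketch of what such a custom aggregator could look like, using reservoir sampling to keep one uniformly random value. The class name and the sampling approach are my own assumptions, not an established Hazelcast recipe, and the value type must be serializable so the sample can travel between members:

import com.hazelcast.aggregation.Aggregator;
import java.util.Map;
import java.util.concurrent.ThreadLocalRandom;

// Keeps one uniformly random value from all accumulated entries.
public class RandomValueAggregator<K, V> extends Aggregator<Map.Entry<K, V>, V> {
    private V sample;
    private long seen;

    @Override
    public void accumulate(Map.Entry<K, V> input) {
        seen++;
        // Reservoir sampling: replace the current sample with probability 1/seen
        if (ThreadLocalRandom.current().nextLong(seen) == 0) {
            sample = input.getValue();
        }
    }

    @Override
    public void combine(Aggregator aggregator) {
        RandomValueAggregator<K, V> other = (RandomValueAggregator<K, V>) aggregator;
        long total = seen + other.seen;
        // Adopt the other partition's sample with probability proportional to its share
        if (total > 0 && ThreadLocalRandom.current().nextLong(total) < other.seen) {
            sample = other.sample;
        }
        seen = total;
    }

    @Override
    public V aggregate() {
        return sample;
    }
}

Usage would then be a one-liner: Integer taskId = workWaitTasks.aggregate(new RandomValueAggregator<>());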
Using QueryBuilders.prefixQuery, I'm trying to get the list of book titles that start with "L" or "J". Is there any way to achieve that?
I know that QueryBuilders.prefixQuery can accept only a single string, like boolQueryBuilder.must(QueryBuilders.prefixQuery("bookTitle", "L"));. Is there any other simple way to achieve that?
You can use the boolean clause should to combine two prefix queries, one for "L" and one for "J", which will provide the expected search results: books whose titles start with either L or J.
In Java code it will look like this:
BoolQueryBuilder boolQueryBuilder = new BoolQueryBuilder();
// Two prefix queries combined with should act as a logical OR
PrefixQueryBuilder lPrefixQueryBuilder = new PrefixQueryBuilder("bookTitle", "L");
PrefixQueryBuilder jPrefixQueryBuilder = new PrefixQueryBuilder("bookTitle", "J");
boolQueryBuilder.should(lPrefixQueryBuilder);
boolQueryBuilder.should(jPrefixQueryBuilder);
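To actually run the query, you can wrap it in a search request. A hedged sketch assuming the high-level REST client, where client and the index name "books" are assumptions of mine:

// client is an existing RestHighLevelClient; "books" is an assumed index name
SearchRequest request = new SearchRequest("books");
request.source(new SearchSourceBuilder().query(boolQueryBuilder));
SearchResponse response = client.search(request, RequestOptions.DEFAULT);
response.getHits().forEach(hit -> System.out.println(hit.getSourceAsString()));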
I wasn't sure about the title but what I want is the following:
The table shop_offer_time_period tells how long an offer is valid. This is done by valid_from_day and valid_until_day as well as by day_of_week_id.
Such a time period is always valid for n * 7 days. This means that if I am fetching from 2000-01-01 until 3000-01-01, I might get the same shop_offer multiple times if shop_offer_time_period says, for example, that an offer is valid every Monday from 2015-01-01 until 2018-01-01.
This is why I want to fetch this into a map like this:
// ..
"2015-01-05": [{"offer_id":1}],
"2015-01-06": [{"offer_id":2}, {"offer_id":3}],
"2015-01-07": [],
"2015-01-08": [],
"2015-01-09": [],
"2015-01-10": [],
"2015-01-11": [],
"2015-01-12": [{"offer_id":1}],
"2015-01-13": [{"offer_id":2}, {"offer_id":3}],
// ..
I'd like to know if I can provide a RecordMapper which returns a list of keys for a single record.
The following is an example of how my fetch currently looks. At the moment I am calling fetchInto and doing the mapping elsewhere. However, if it is somehow possible, I'd like to do the mapping right here inside my repository.
Table<?> asTable = this.ctx.select(
SHOP_OFFER_TIME_PERIOD.SHOP_OFFER_ID,
SHOP_OFFER_TIME_PERIOD.PRICE
)
.from(SHOP_OFFER_TIME_PERIOD)
.where(
SHOP_OFFER_TIME_PERIOD.SHOP_ID.eq(shopId)
.and(
// Contained
SHOP_OFFER_TIME_PERIOD.VALID_FROM_DAY.greaterOrEqual(fromDay)
.and(SHOP_OFFER_TIME_PERIOD.VALID_UNTIL_DAY.lessOrEqual(toDay))
// Overlapping from left
.or(SHOP_OFFER_TIME_PERIOD.VALID_FROM_DAY.lt(fromDay)
.and(SHOP_OFFER_TIME_PERIOD.VALID_UNTIL_DAY.gt(fromDay))
// Overlapping from right
.or(SHOP_OFFER_TIME_PERIOD.VALID_FROM_DAY.lt(toDay)
.and(SHOP_OFFER_TIME_PERIOD.VALID_UNTIL_DAY.gt(toDay))))))
.asTable("shopOfferTimePeriod");
List<ShopOfferDTO> fetchInto = this.ctx.select(
SHOP_OFFER.ID,
SHOP_OFFER.SHOP_ID,
SHOP_OFFER.SHOP_TIMES_TYPE_ID,
asTable.field(SHOP_OFFER_TIME_PERIOD.DAY_OF_WEEK_ID),
asTable.field(SHOP_OFFER_TIME_PERIOD.PRICE)
)
.from(SHOP_OFFER)
.join(asTable)
.on(asTable.field(SHOP_OFFER_TIME_PERIOD.SHOP_OFFER_ID).eq(SHOP_OFFER.ID)
.and(SHOP_OFFER.SHOP_TIMES_TYPE_ID.eq(offerType)))
.fetchInto(ShopOfferDTO.class);
Please note that I am already fetching the result into my DTO instead of a generated record object.
There are many ways this question can be answered. You're probably looking for a SQL solution, but the way the question is currently phrased, this answer might do.
The most straightforward approach is to create the grouping in memory in Java after fetching the data from the database. Here are two alternatives:
Using the jOOQ API:
Map<Date, List<Integer>> result =
this.ctx.select(
...
)
.from(SHOP_OFFER).join(...)
.fetchGroups(SHOP_OFFER.DATE_COLUMN, SHOP_OFFER.ID);
(I'm assuming you have such a DATE_COLUMN here)
Using Java 8:
jOOQ integrates seamlessly with Java 8's Stream API (examples can be seen in this blog post), so the most powerful way to solve your problem is probably with Java 8 Streams:
Map<Date, List<Integer>> result =
this.ctx.select(
...
)
.from(SHOP_OFFER).join(...)
.fetch()
.stream()
.collect(
Collectors.groupingBy(
r -> r.getValue(SHOP_OFFER.DATE_COLUMN),
Collectors.mapping(
r -> r.getValue(SHOP_OFFER.ID),
Collectors.toList()
)
)
);
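Neither variant expands the weekly repetition into individual days by itself, though. Here is a hedged sketch of that post-processing step; the DTO getters (getValidFromDay(), getValidUntilDay(), getOfferId()) are assumptions of mine, not generated jOOQ code:

// Expand each weekly repeating period into one map entry per occurrence day
Map<LocalDate, List<Integer>> offersByDay = new TreeMap<>();
for (ShopOfferDTO dto : fetchInto) {
    LocalDate day = dto.getValidFromDay();
    while (!day.isAfter(dto.getValidUntilDay())) {
        offersByDay.computeIfAbsent(day, d -> new ArrayList<>()).add(dto.getOfferId());
        day = day.plusDays(7); // periods always repeat in n * 7 day steps
    }
}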
I am using the Rally Lookback API with Java. I am trying to fetch historical data for features; the sample code I am using is shown below.
LookbackApi lookbackApi = new LookbackApi();
lookbackApi.setCredentials("username", "password");
lookbackApi.setWorkspace("47903209423"); // workspace OID passed as a string
lookbackApi.setServer("https://rally1.rallydev.com");
//lookbackApi.setWorkspace("90432948");
LookbackQuery query = lookbackApi.newSnapshotQuery();
query.addFindClause("_TypeHierarchy", "PortfolioItem/Feature");
query.setPagesize(200) // set pagesize to 200 instead of the default 20k
.setStart(200) // ask for the second page of data
.requireFields("ScheduleState", // A useful set of fields for defects, add any others you may want
"ObjectID",
"State",
"Project",
"PlanEstimate",
"_ValidFrom",
"_ValidTo")
.sortBy("_UnformattedID")
.hydrateFields("ScheduleState","State", "PlanEstimate","Project"); // ScheduleState will come back as an OID if it doesn't get hydrated
LookbackResult resultSet = query.execute();
int resultCount = resultSet.Results.size();
Map<String,Object> firstSnapshot = resultSet.Results.get(0);
Iterator<Map<String,Object>> iterator = resultSet.getResultsIterator();
while (iterator.hasNext()) {
Map<String, Object> snapshot = iterator.next();
}
I need a way to add a condition so that it fetches all the records from history where the plan estimate changed, but ignores other history for any feature and underlying user story. I need it this way so that we can track plan estimate changes while avoiding fetching unnecessary data and reducing the time this takes.
I'm not familiar with the Java toolkit, but using the raw Lookback API, you would accomplish this with a filter clause like {"_PreviousValues.PlanEstimate": {"$exists": true}}.
Map<String, Object> ifExist = new HashMap<>();
ifExist.put("$exists", true);
// Note: true is a Java boolean; be careful with this, as the string "true" will not work.
query.addFindClause("_PreviousValues.PlanEstimate", ifExist);
Additionally, one needs to consider adding "_PreviousValues.PlanEstimate" to .requireFields() in case the previous value of "PlanEstimate" itself should be returned, not just the fact that it changed.
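Putting the pieces together with the query from the original question, a minimal sketch (the field list is illustrative) could look like this:

// Only fetch snapshots where PlanEstimate actually changed
Map<String, Object> planEstimateChanged = new HashMap<>();
planEstimateChanged.put("$exists", true);

LookbackQuery query = lookbackApi.newSnapshotQuery();
query.addFindClause("_TypeHierarchy", "PortfolioItem/Feature");
query.addFindClause("_PreviousValues.PlanEstimate", planEstimateChanged);
query.requireFields("ObjectID", "PlanEstimate", "_PreviousValues.PlanEstimate",
        "_ValidFrom", "_ValidTo");
LookbackResult resultSet = query.execute();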
I am using Jedis to connect to Redis in Java.
I want to delete keys matching a similar pattern from the Redis server using Jedis.
e.g.
1. 1_pattern
2. 2_pattern
3. 3_pattern
4. 4_pattern
5. 5_pattern
We can use del(key), but it will delete only one key.
I want something like del("*_pattern")
Redis key patterns (as used by the KEYS command) are glob-style patterns, not regular expressions. In your code:
String keyPattern = "*_pattern"; // or "*pattern" for a looser match
Set<String> keyList = jedis.keys(keyPattern);
for (String key : keyList) {
    jedis.del(key); // one DEL call per key
}
// close/return the Jedis resource when done
I think the above solution works well.
One of the most efficient ways is to reduce the number of Redis calls by issuing a single DEL for all matched keys:
String keyPattern = "*_pattern";
Set<String> keys = redis.keys(keyPattern);
if (keys != null && !keys.isEmpty()) {
    // delete all matched keys in one round trip
    redis.del(keys.toArray(new String[0]));
}
You could combine the DEL key [key ...] command with the KEYS pattern command to get what you want.
For example, you can do this with Jedis like so:
Set<String> keys = jedis.keys("*_pattern"); // or "?_pattern" to match one character before _pattern
if (!keys.isEmpty()) {
    jedis.del(keys.toArray(new String[0]));
}
But be aware that this operation could take a long time: KEYS is O(N), where N is the number of keys in the database; DEL is O(M), where M is the number of keys deleted; and for each deleted key that is a list/set/etc., it's O(P), where P is the length of the list/set/etc.
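If blocking the server with KEYS is a concern, a hedged alternative is to iterate with SCAN instead; the method names below follow recent Jedis versions:

// SCAN walks the keyspace incrementally without blocking the server
ScanParams params = new ScanParams().match("*_pattern").count(500);
String cursor = ScanParams.SCAN_POINTER_START;
do {
    ScanResult<String> page = jedis.scan(cursor, params);
    List<String> keys = page.getResult();
    if (!keys.isEmpty()) {
        jedis.del(keys.toArray(new String[0]));
    }
    cursor = page.getCursor();
} while (!"0".equals(cursor));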
See my answer here.
In your case, it's a simple call to deleteKeys("*_pattern");
I want to do the following:
PreparedQuery pq = datastore.prepare(q);
int count = pq.countEntities(FetchOptions.ALL);
But there is no ALL option. So how do I do it?
For context, say I want to count all entries in my table where color is orange.
If I can't do this directly using DatastoreService, can I use Datanucleus's JPA? As in do they support SELECT COUNT(*) ... for the appengine datastore?
You can count the total number of records using the following code:
com.google.appengine.api.datastore.Query qry = new com.google.appengine.api.datastore.Query("EntityName");
com.google.appengine.api.datastore.DatastoreService datastoreService = DatastoreServiceFactory.getDatastoreService();
int totalCount = datastoreService.prepare(qry).countEntities(FetchOptions.Builder.withDefaults());
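Since the question actually asks for a filtered count (entries where color is orange), here is a hedged variant of the same code; the property name "color" is taken from the question:

com.google.appengine.api.datastore.Query qry =
        new com.google.appengine.api.datastore.Query("EntityName");
// Count only entities whose "color" property equals "orange"
qry.setFilter(new Query.FilterPredicate("color", Query.FilterOperator.EQUAL, "orange"));
int totalCount = datastoreService.prepare(qry).countEntities(FetchOptions.Builder.withDefaults());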
I hope it will help you.
The marked answer is not correct; it will max out at 1,000.
This is how to get the correct count, using the built-in datastore statistics (note that the __Stat_Kind__ statistics are only updated periodically, so the count can lag behind recent writes):
DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
Query query = new Query("__Stat_Kind__");
Query.Filter eqf = new Query.FilterPredicate("kind_name",
Query.FilterOperator.EQUAL,
"MY_ENTITY_KIND");
query.setFilter(eqf);
Entity entityStat = ds.prepare(query).asSingleEntity();
Long totalEntities = (Long) entityStat.getProperty("count");
You can use Google's plugin for DataNucleus, which seems to show support for count()
This is old, but it should help new developers seeking a way out.
The best way to go about this is the sharded counter technique: as you save entities of a kind you know will grow over time, you increment a sharded counter, and summing the counts of all shards gives the actual total number of records in the datastore kind (table). A minimal sketch of the idea follows.
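This sketch uses the low-level datastore API; the kind name "CounterShard", the property name "count", and the number of shards are my own illustrative choices, not part of any App Engine API:

import com.google.appengine.api.datastore.*;
import java.util.concurrent.ThreadLocalRandom;

public class ShardedCounter {
    private static final int NUM_SHARDS = 20; // more shards = higher write throughput
    private final DatastoreService ds = DatastoreServiceFactory.getDatastoreService();

    // Increment one randomly chosen shard inside a transaction.
    public void increment() {
        Key key = KeyFactory.createKey("CounterShard",
                "shard-" + ThreadLocalRandom.current().nextInt(NUM_SHARDS));
        Transaction tx = ds.beginTransaction();
        try {
            Entity shard;
            try {
                shard = ds.get(tx, key);
            } catch (EntityNotFoundException e) {
                shard = new Entity(key);
                shard.setProperty("count", 0L);
            }
            shard.setProperty("count", (Long) shard.getProperty("count") + 1);
            ds.put(tx, shard);
            tx.commit();
        } finally {
            if (tx.isActive()) {
                tx.rollback();
            }
        }
    }

    // The total is the sum over all shards, cheap to read since NUM_SHARDS is small.
    public long total() {
        long sum = 0;
        for (Entity shard : ds.prepare(new Query("CounterShard")).asIterable()) {
            sum += (Long) shard.getProperty("count");
        }
        return sum;
    }
}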
Use this link for help on how to go about it. For a better understanding, watch the Google I/O 2008 talk on scaling web applications here; after that, move on to the App Engine documentation here, so you get the grasp of it quickly. There is also a GitHub example test.
For an added example, see this blog tutorial, which explains a simple case.