I use the following code to extract all of the keys that start with "NAME:", but it returns only a little over 5,000 records (there are over 60,000 keys in my index). Can anyone explain why this is happening, or how I can extract all of the keys from the Redis database?
jedis.select(3);
Set<String> names=jedis.keys("NAME:*");
Iterator<String> it = names.iterator();
while (it.hasNext()) {
    String s = it.next();
    System.out.println(s);
}
When the Redis server stores many records, the jedis.keys() command can overload it, with the result that the call stops before the task is done.
Use jedis.hscan() instead to avoid this problem.
ScanParams scanParam = new ScanParams();
String cursor = "0";
scanParam.match(prefix + "*");
scanParam.count(50000); // 2000 or 30000, depending on your data set
do {
    // key is the name of the hash whose fields are being scanned
    ScanResult<Map.Entry<String, String>> hscan = jedis.hscan(key, cursor, scanParam);
    // process hscan.getResult() here
    // ...
    // advance the cursor
    cursor = hscan.getStringCursor();
} while (!"0".equals(cursor)); // a cursor of "0" means the scan is complete
It works well in your case.
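If your NAME:* entries are plain string keys rather than fields inside a hash, the same cursor loop works with jedis.scan(). A minimal sketch, assuming a Jedis connection like the one in the question (the count hint of 1000 is an arbitrary choice):
ScanParams params = new ScanParams().match("NAME:*").count(1000);
String cursor = ScanParams.SCAN_POINTER_START; // "0"
Set<String> allKeys = new HashSet<>();
do {
    ScanResult<String> page = jedis.scan(cursor, params);
    allKeys.addAll(page.getResult()); // collect this page of keys
    cursor = page.getCursor();        // getStringCursor() on older Jedis versions
} while (!"0".equals(cursor));        // a cursor of "0" means the scan is complete
System.out.println("Found " + allKeys.size() + " keys");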
You should not use the keys method for normal Redis usage as stated in the JavaDoc below.
http://javadox.com/redis.clients/jedis/2.2.0/redis/clients/jedis/Jedis.html#keys(java.lang.String)
Instead consider using Redis sets like this.
final Jedis jedis = new Jedis("localhost");
jedis.sadd("setName", "someValue");
jedis.smembers("setName");
jedis.close();
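Applied to the question, one way to use a set is to maintain your own index of the NAME:* keys whenever you write them; names:index below is just an illustrative set name, not an existing convention:
// When a NAME:* key is written, also record it in an index set.
jedis.set("NAME:1234", "Alice");
jedis.sadd("names:index", "NAME:1234");
// Later, all NAME:* keys can be fetched without KEYS or SCAN:
Set<String> nameKeys = jedis.smembers("names:index");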
Try without NAME in the keys search pattern:
Set<String> names = jedis.keys("*");
java.util.Iterator<String> it = names.iterator();
while(it.hasNext()) {
String s = it.next();
System.out.println(s + " : " + jedis.get(s));
}
In Redis cache I have 3 keys
1111-2222-4444
1111-2222-3333
1112-2222-3333
I have a partial key 1111, and I want to return the two keys 1111-2222-4444, 1111-2222-3333
I have the following code
public List<String> getKeys(String partialkey) {
ScanParams scanParams = new ScanParams();
scanParams.match("*");
String cur = redis.clients.jedis.ScanParams.SCAN_POINTER_START;
Jedis jedis = jedisPool.getResource();
List<String> keys = new ArrayList<String>();
boolean cycleIsFinished = false;
while (!cycleIsFinished) {
ScanResult<String> scanResult = jedis.scan(cur, scanParams);
cur = scanResult.getCursor();
for(String key :scanResult.getResult()) {
if(isKey(key, partialkey) ) {
keys.add(key);
}
}
if (cur.equals("0")) {
cycleIsFinished = true;
}
}
return keys;
}
And a method to match the partial key:
private boolean isKey(String key, String match) {
String[] fields = key.split("-");
if(match.equals(fields[0])) {
return true;
}
return false;
}
Now this works, but it seems very clunky; there could be thousands of keys to search through.
My question is: is there a pure Redis way to do this, where it only returns the two keys that the partial key matches?
With thanks to shmosel
public List<String> getKeys(String partialkey) {
ScanParams scanParams = new ScanParams();
scanParams.match(partialkey+"-*");
String cur = redis.clients.jedis.ScanParams.SCAN_POINTER_START;
Jedis jedis = jedisPool.getResource();
List<String> keys = new ArrayList<String>();
boolean cycleIsFinished = false;
while (!cycleIsFinished) {
ScanResult<String> scanResult = jedis.scan(cur, scanParams);
cur = scanResult.getCursor();
for(String key :scanResult.getResult()) {
//if(isKey(key, partialkey) ) {
keys.add(key);
//}
}
if (cur.equals("0")) {
cycleIsFinished = true;
}
}
return keys;
}
Use the Redis command KEYS to return all keys matching a pattern.
Redis example:
KEYS 1111-*
Java example:
private List<String> getKeys(String partialKey) {
try (Jedis jedis = jedisPool.getResource()) {
return jedis.keys(partialKey + "-*")
.stream()
.toList();
}
}
Be careful with the jedis.keys() method; it can ruin performance:
Warning: consider KEYS as a command that should only be used in production environments with extreme care. It may ruin performance when it is executed against large databases. This command is intended for debugging and special operations, such as changing your keyspace layout. Don't use KEYS in your regular application code. If you're looking for a way to find keys in a subset of your keyspace, consider using SCAN or sets.
More explanation here; using SCAN is a good alternative.
I am trying to query the Google public DNS server (8.8.8.8) to get the IP address of a known URL. However, it seems I am not able to get it using the following code. I am using the dnsjava Java library. This is my current code:
try {
    Lookup lookup = new Lookup("stackoverflow.com", Type.NS);
    SimpleResolver resolver = new SimpleResolver("8.8.8.8");
    lookup.setDefaultResolver(resolver);
    lookup.setResolver(resolver);
    Record[] records = lookup.run();
    for (int i = 0; i < records.length; i++) {
        Record r = records[i];
        System.out.println(r.getName() + "," + r.getAdditionalName());
    }
}
catch (Exception ex) {
    ex.printStackTrace();
    logger.error(ex.getMessage(), ex);
}
Results:
stackoverflow.com.,ns-1033.awsdns-01.org.
stackoverflow.com.,ns-cloud-e1.googledomains.com.
stackoverflow.com.,ns-cloud-e2.googledomains.com.
stackoverflow.com.,ns-358.awsdns-44.com.
You don’t need a DNS library just to look up an IP address. You can simply use JNDI:
Properties env = new Properties();
env.setProperty(Context.INITIAL_CONTEXT_FACTORY,
"com.sun.jndi.dns.DnsContextFactory");
env.setProperty(Context.PROVIDER_URL, "dns://8.8.8.8");
DirContext context = new InitialDirContext(env);
Attributes list = context.getAttributes("stackoverflow.com",
new String[] { "A" });
NamingEnumeration<? extends Attribute> records = list.getAll();
while (records.hasMore()) {
Attribute record = records.next();
String name = record.get().toString();
System.out.println(name);
}
If you insist on using the dnsjava library, you need to use Type.A (as your code was originally doing, before your edit).
Looking at the documentation for the Record class, notice the long list under Direct Known Subclasses. You need to cast each Record to the appropriate subclass, which in this case is ARecord.
Once you’ve done that cast, you have an additional method available, getAddress:
for (int i = 0; i < records.length; i++) {
ARecord r = (ARecord) records[i];
System.out.println(r.getName() + "," + r.getAdditionalName()
+ " => " + r.getAddress());
}
I am trying to update my database with the data in "replace". replace has a few columns of data, and I want to update those columns in my table abcd accordingly.
But when I run this code, only the last column (the last piece of data) gets updated in the DB, and I guess the iteration is not right in this case.
Please help; I appreciate your suggestions.
private static void updateDB(HashMap<String, HashMap<Integer, String>> map) throws ConfigException, SQLException {
MConfig config = ScriptsTools.init();
ConnPool select = ScriptsTools.openPool("database", config);
Connection write = select.getWrite();
StringBuilder replace = new StringBuilder();
replace.append("REPLACE INTO abcd (a,b,c,d,e) values ");
Iterator<Entry<String, HashMap<Integer, String>>> it = map.entrySet().iterator();
while (it.hasNext()) {
Map.Entry<String, HashMap<Integer, String>> pair = it.next();
replace.append("('");
replace.append(pair.getKey());
replace.append("','");
replace.append(pair.getValue().get(2));
replace.append("','");
replace.append(pair.getValue().get(11));
replace.append("',");
replace.append("now()");
replace.append(",");
replace.append("now()");
replace.append("), ");
}
replace.delete(replace.length()-2, replace.length());
//System.out.println("Query : "+replace.toString());
String words = replace.toString();
System.out.println(words);
Statement st = write.createStatement();
st.executeUpdate(replace.toString());
}
Because you use a Statement instead of a PreparedStatement, it is easy to print the query that will be executed on the database using the following command:
System.out.println(replace.toString());
Take it, execute it in your preferred database client, and check whether it works as expected.
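If you then want to move away from string concatenation altogether, a PreparedStatement with batching is the usual alternative. This is only a sketch, reusing the write connection, the map parameter, and the abcd columns from the question:
String sql = "REPLACE INTO abcd (a, b, c, d, e) VALUES (?, ?, ?, now(), now())";
try (PreparedStatement ps = write.prepareStatement(sql)) {
    for (Map.Entry<String, HashMap<Integer, String>> pair : map.entrySet()) {
        ps.setString(1, pair.getKey());
        ps.setString(2, pair.getValue().get(2));
        ps.setString(3, pair.getValue().get(11));
        ps.addBatch();     // queue one row per map entry
    }
    ps.executeBatch();     // send all rows in one round trip
}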
I have a requirement to read user information from two different sources (databases) per userId and store the consolidated information in a Map keyed by userId. The number of users can vary based on the period they have opted for; groups of users may belong to different periods of the year, e.g. daily, weekly, or monthly users.
I used HashMap and LinkedHashMap to get this done. As this slows down the process, I thought of using threading to make it faster.
After reading some tutorials and examples, I am now using ConcurrentHashMap and ExecutorService.
In some cases, based on validation, I want to skip the current iteration and move on to the next user's info, but the continue keyword is not allowed inside the Callable within the for loop. Is there any way to achieve the same thing differently in multithreaded code?
Moreover, although the code below works, it is not significantly faster than the code without threading, which makes me doubt whether the ExecutorService is implemented correctly.
How do we debug when we get an error in multithreaded code? Execution stops at the breakpoint, but not consistently, and it does not move to the next line with F6.
Can someone point out whether I am missing something in the code? Any other example of a similar use case would also be of great help.
public void getMap() throws UserException
{
long startTime = System.currentTimeMillis();
Map<String, Map<Integer, User>> map = new ConcurrentHashMap<String, Map<Integer, User>>();
//final String key = "";
try
{
final Date todayDate = new Date();
List<String> applyPeriod = db.getPeriods(todayDate);
for (String period : applyPeriod)
{
try
{
final String key = period;
List<UserTable1> eligibleUsers = db.findAllUsers(key);
Map<Integer, User> userIdMap = new ConcurrentHashMap<Integer, User>();
ExecutorService executor = Executors.newFixedThreadPool(eligibleUsers.size());
CompletionService<User> cs = new ExecutorCompletionService<User>(executor);
int userCount=0;
for (UserTable1 eligibleUser : eligibleUsers)
{
try
{
cs.submit(
new Callable<User>()
{
public User call()
{
int userId = eligibleUser.getUserId();
List<EmployeeTable2> empData = db.findByUserId(userId);
EmployeeTable2 emp = null;
if (null != empData && !empData.isEmpty())
{
emp = empData.get(0);
}else{
String errorMsg = "No record found for given User ID in emp table";
logger.error(errorMsg);
//continue;
// continue does not work here.
}
User user = new User();
user.setUserId(userId);
user.setFullName(emp.getFullName());
return user;
}
}
);
userCount++;
}
catch(Exception ex)
{
String errorMsg = "Error while creating map :" + ex.getMessage();
logger.error(errorMsg);
}
}
for (int i = 0; i < userCount ; i++ ) {
try {
User user = cs.take().get();
if (user != null) {
userIdMap.put(user.getUserId(), user);
}
} catch (ExecutionException e) {
} catch (InterruptedException e) {
}
}
executor.shutdown();
map.put(key, userIdMap);
}
catch(Exception ex)
{
String errorMsg = "Error while creating map :" + ex.getMessage();
logger.error(errorMsg);
}
}
}
catch(Exception ex){
String errorMsg = "Error while creating map :" + ex.getMessage();
logger.error(errorMsg);
}
logger.info("Size of Map : " + map.size());
Set<String> periods = map.keySet();
logger.info("Size of periods : " + periods.size());
for(String period :periods)
{
Map<Integer, User> mapOfuserIds = map.get(period);
Set<Integer> userIds = mapOfuserIds.keySet();
logger.info("Size of Set : " + userIds.size());
for(Integer userId : userIds){
User inf = mapOfuserIds.get(userId);
logger.info("User Id : " + inf.getUserId());
}
}
long endTime = System.currentTimeMillis();
long timeTaken = (endTime - startTime);
logger.info("All threads are completed in " + timeTaken + " milisecond");
logger.info("******END******");
}
You really don't want to create a thread pool with as many threads as users you've read from the db. That rarely makes sense, because threads need to run somewhere: there are not many servers out there with 10, 100, or even 1000 cores reserved for your application. A much smaller value, maybe 5, is often enough, depending on your environment.
And, as always with performance questions: you first need to measure what your actual bottleneck is. Your application may simply not benefit from threading because, for example, you are reading from a db which only allows 5 concurrent connections at the same time. In that case all your other 995 threads will simply wait.
Another thing to consider is network latency: reading multiple user ids from multiple threads may even increase the round-trip time needed to get the data for one user from the database. An alternative approach might be not to read one user at a time, but to fetch the data for all 10,000 of them at once. That way a 10 GBit Ethernet connection to your database might really speed things up, because the communication overhead per request is small and one answer can quickly serve you all the data you need.
So in short, in my opinion your question is about performance optimization in general, but you don't yet know enough about your bottleneck to decide which way to go.
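As a concrete illustration of the first point, here is a minimal sketch of a small shared pool, reusing the types from the question; the pool size of 5 is just the rough figure mentioned above, not a measured recommendation:
ExecutorService executor = Executors.newFixedThreadPool(5); // bounded pool instead of one thread per user
CompletionService<User> cs = new ExecutorCompletionService<>(executor);
for (UserTable1 eligibleUser : eligibleUsers) {
    final int userId = eligibleUser.getUserId();
    cs.submit(() -> {
        List<EmployeeTable2> empData = db.findByUserId(userId);
        if (empData == null || empData.isEmpty()) {
            return null; // "skip this user" instead of continue
        }
        User user = new User();
        user.setUserId(userId);
        user.setFullName(empData.get(0).getFullName());
        return user;
    });
}
for (int i = 0; i < eligibleUsers.size(); i++) {
    try {
        User user = cs.take().get();  // blocks until the next task completes
        if (user != null) {           // null results are simply skipped
            userIdMap.put(user.getUserId(), user);
        }
    } catch (InterruptedException | ExecutionException e) {
        logger.error("Error while collecting user result", e);
    }
}
executor.shutdown();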
You could try something like this:
List<String> periods = db.getPeriods(todayDate);
Map<String, Map<Integer, User>> hm = new ConcurrentHashMap<>(); // thread-safe, since forEach runs on several threads
periods.parallelStream().forEach(s -> {
    List<UserTable1> eligibleUsers = db.findAllUsers(s);
    hm.put(s, eligibleUsers.parallelStream()
        .collect(Collectors.toMap(UserTable1::getUserId,
                                  u -> createUserForId(u.getUserId()))));
});
And in createUserForId you do your db reading:
private User createUserForId(Integer id) {
    List<EmployeeTable2> empData = db.findByUserId(id);
    // ... same empty-result validation as in the original code ...
    EmployeeTable2 emp = empData.get(0);
    User user = new User();
    user.setUserId(id);
    user.setFullName(emp.getFullName());
    return user;
}
I'm running a sample Java program to query a DynamoDB table. The table has about 90000 items, but when I get the scan count from Java it shows only 1994 items.
ScanRequest scanRequest = new ScanRequest().withTableName(tableName);
ScanResult result = client.scan(scanRequest);
System.out.println("#items:" + result.getScannedCount());
The program outputs #items:1994,
but the details from the Amazon AWS console show:
Item Count*: 89249
Any idea?
Thanks
Scanning or querying DynamoDB returns at most 1 MB of data per request.
The count is the number of returned items that fit in that 1 MB. In order to get the whole table, you should keep scanning until LastEvaluatedKey is null.
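A minimal sketch of that loop, assuming the same v1 SDK client and tableName from the question and simply summing the per-page counts:
ScanRequest req = new ScanRequest().withTableName(tableName);
long total = 0;
ScanResult result;
do {
    result = client.scan(req);
    total += result.getCount();                              // items in this 1 MB page
    req.setExclusiveStartKey(result.getLastEvaluatedKey());  // null ends the loop
} while (result.getLastEvaluatedKey() != null);
System.out.println("#items:" + total);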
Set your book object with the correct hash key value, and use DynamoDBMapper to get the count:
DynamoDBQueryExpression<Book> queryExpression = new DynamoDBQueryExpression<Book>()
.withHashKeyValues(book);
dynamoDbMapper.count(Book.class, queryExpression);
This should help; it worked for me:
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
.withRegion("your region").build();
DynamoDB dynamoDB = new DynamoDB(client);
TableDescription tableDescription = dynamoDB.getTable("table name").describe();
tableDescription.getItemCount();
Based on the answer from nightograph:
private ArrayList<String> fetchItems() {
ArrayList<String> ids = new ArrayList<>();
ScanResult result = null;
do {
ScanRequest req = new ScanRequest();
req.setTableName("table_name");
if (result != null) {
req.setExclusiveStartKey(result.getLastEvaluatedKey());
}
result = amazonDynamoDBClient.scan(req);
List<Map<String, AttributeValue>> rows = result.getItems();
for (Map<String, AttributeValue> map : rows) {
AttributeValue v = map.get("rangeKey");
String id = v.getS();
ids.add(id);
}
} while (result.getLastEvaluatedKey() != null);
System.out.println("Result size: " + ids.size());
return ids;
}
I agree with nightograph. I think this link is useful:
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/QueryAndScan.html
I just tested with this example. Anyway, this is for DynamoDB v2:
final ScanRequest scanRequest = new ScanRequest()
.withTableName("table_name");
final ScanResult result = dynamoDB.scan(scanRequest);
return result.getCount();