SingleColumnValueFilter has no impact on result - java

Hi,
This question is pretty similar to SingleColumnValueFilter not returning the proper number of rows.
I use four SingleColumnValueFilters with operator EQUAL and add them to a FilterList with operator MUST_PASS_ONE. The number of results is the same as without setting the FilterList. The value to compare is a byte[] that should be correct, as I simply store the values from previous results. (It is an IP address: when retrieving the data I create an InetAddress from the byte[], and for the query described here I just call InetAddress.getAddress, which returns a byte[].)
Do you have any idea what the problem might be? Am I using the filter wrong?
EDIT:
I also used the original values retrieved by the query as the value for the SingleColumnValueFilter, and there was no difference in the results, so the byte[] contents can't be the problem.

I think I can answer this myself; sorry for not debugging and checking all the HBase code first.
I just checked the implementation of the compare algorithm (which is lexicographic) and realized that the length is not taken into account. I had assumed shorter values would be padded with zeros, but unfortunately they are not.
The only reasonable option is to create a custom comparator (e.g. see How do you use a custom comparator with SingleColumnValueFilter on HBase?).
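For reference, a minimal sketch of how such a filter list is typically wired up with the classic HBase client API. The column family, qualifier, and the addressesFromPreviousResults collection are placeholders, not taken from the question; note also that, by default, a SingleColumnValueFilter emits rows that do not contain the column at all unless setFilterIfMissing(true) is set.

```java
import java.net.InetAddress;

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

// "cf" and "ip" are placeholder family/qualifier names
byte[] family = Bytes.toBytes("cf");
byte[] qualifier = Bytes.toBytes("ip");

FilterList filters = new FilterList(FilterList.Operator.MUST_PASS_ONE);
for (InetAddress addr : addressesFromPreviousResults) {   // hypothetical collection of previous results
    SingleColumnValueFilter filter = new SingleColumnValueFilter(
            family, qualifier, CompareOp.EQUAL, addr.getAddress());
    filter.setFilterIfMissing(true);  // otherwise rows without this column pass the filter anyway
    filters.addFilter(filter);
}

Scan scan = new Scan();
scan.setFilter(filters);
```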

Related

MongoDB (Java): efficient update of multiple documents to different(!) values

I have a MongoDB database and the program I'm writing is meant to change the values of a single field for all documents in a collection. Now if I want them all to change to a single value, like the string value "mask", then I know that updateMany does the trick and it's quite efficient.
However, what I want is an efficient solution for updating to different new values; in fact, I want to pick the new value of the field in question for each document from a list, e.g. an ArrayList. But then something like this
collection.updateMany(new BasicDBObject(),
        new BasicDBObject("$set", new BasicDBObject(fieldName,
                listOfMasks.get(random.nextInt(size)))));
wouldn't work, since updateMany doesn't recompute the value that the field should be set to; it just evaluates the argument
listOfMasks.get(random.nextInt(size))
once and then uses that value for all the documents. So I don't think this problem can be solved with updateMany, since it's simply not versatile enough.
But I was wondering if anyone has any ideas for at least making it faster than simply iterating through all the documents and calling updateOne for each of them with a new value from the ArrayList (in a random order, but that's just a detail), like below?
// Loop until the MongoCursor is empty (until the search is complete)
try {
    while (cursor.hasNext()) {
        // Pick a random mask
        String mask = listOfMasks.get(random.nextInt(size));
        // Update this document
        collection.updateOne(cursor.next(), Updates.set("test_field", mask));
    }
} finally {
    cursor.close();
}
MongoDB provides the bulk write API to batch updates. This would be appropriate for your example of setting the value of a field to a random value (determined on the client) for each document.
Alternatively, if there is a pattern to the changes needed, you could potentially use a find-and-modify operation with the available update operators.
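A minimal sketch of what that bulk write could look like with the Java driver, reusing listOfMasks, random, and the "test_field" name from the question; the batch size of 1000 and the _id-only projection are assumptions, not part of the original answer.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

import org.bson.Document;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoCursor;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.UpdateOneModel;
import com.mongodb.client.model.Updates;
import com.mongodb.client.model.WriteModel;

static void maskAll(MongoCollection<Document> collection, List<String> listOfMasks, Random random) {
    int size = listOfMasks.size();
    List<WriteModel<Document>> ops = new ArrayList<>();
    // Fetch only the _id of each document; the new value is chosen on the client
    try (MongoCursor<Document> cursor =
             collection.find().projection(new Document("_id", 1)).iterator()) {
        while (cursor.hasNext()) {
            Object id = cursor.next().get("_id");
            String mask = listOfMasks.get(random.nextInt(size));
            ops.add(new UpdateOneModel<>(Filters.eq("_id", id),
                                         Updates.set("test_field", mask)));
            if (ops.size() == 1000) {   // flush in batches to keep memory bounded
                collection.bulkWrite(ops);
                ops.clear();
            }
        }
    }
    if (!ops.isEmpty()) {
        collection.bulkWrite(ops);   // flush the remainder
    }
}
```

Each UpdateOneModel carries its own filter and update, so every document can get a different value while the driver still sends the operations to the server in batches.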

"Escaped hexadecimal" to boolean

I'm working with HBase on a project and running into a seemingly simple situation that is throwing me for a loop. HBase can store table values as escaped hexadecimal. In my case, I have true/false being stored as \x00 and \xFF, respectively.
The problem is (besides my being unfamiliar with Java) that I need to find a way to convert these to a boolean, or at least to compare them as if they were booleans. They will never be anything other than \x00 and \xFF.
Is there not an elegant way to do this?
Please help, I'm really stuck.
Edit: This is probably relevant: HBase shell - how to write byte value
I suspect you could do something like... hex -> binary -> boolean.
But there might even be a toBoolean method already.
Or you could override the compare method they're using. But this could yield undesirable effects.
Can you post the API for the class you're using?
Ok, apparently there is a Bytes.toBoolean() function.
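A short sketch of that call; the column family and qualifier names below are placeholders, and the Result would come from a Get or Scan. Bytes.toBoolean expects a single-byte array, treating 0x00 as false and anything else (such as 0xFF) as true.

```java
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

// "cf" and "flag" are placeholder family/qualifier names
byte[] raw = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("flag"));
boolean flag = Bytes.toBoolean(raw);   // \x00 -> false, \xFF -> true
```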

RecyclerView Adapter Map instead of ArrayList [duplicate]

Edit: Figured it out, check my posted answer if you're having similar issues.
I know there are several questions about this issue, but none of their solutions are working for me.
In my model class I have made sure to use List instead of ArrayList to avoid Firebase issues, but I am still getting this error. It's a lot of code, but most questions ask for all of it, so I'll post it all.
TemplateModelClass.java
//
I've used this basic model successfully many times. For the
HashMap<String, List<String>> fields,
the String key is an incremented integer converted to a String, and the lists just contain Strings. Here's some sample JSON from Firebase:
//
Formatted that as best I could. If you need a picture of it, let me know and I'll get a screenshot.
And I am getting this error, as stated in the title:
com.google.firebase.database.DatabaseException: Expected a Map while deserializing, but got a class java.util.ArrayList
The most upvoted question about this seems to have something to do with a problem using an integer as a key, but I think I've avoided that by always using an integer converted to a string. It may be interpreting it strangely, so I'll try some more stuff in the meantime. Thanks for reading!
Alright, figured it out. If anyone reading this has this problem and is using incremented ints/longs/whatever that get converted to strings, you must add some characters to the converted int. Firebase apparently converts such keys back into non-Strings if the conversion is possible.
For example, if you do something like this:
int inc = 0;
inc++; // 1
map.put(String.valueOf(inc), someList);
Firebase interprets that key as 1 instead of "1".
So, to force Firebase to interpret it as a string, do something like this:
int inc = 0;
inc++; // 1
map.put(String.valueOf(inc) + "_key", someList);
And everything works out perfectly. Obviously if you also need to read those Strings back to ints, just split the string with "[_]" and you're good to go.
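For example, reading the suffixed key back (just restating the trick above in code):

```java
String key = String.valueOf(inc) + "_key";            // e.g. "1_key"
int original = Integer.parseInt(key.split("[_]")[0]); // back to 1
```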
The main issue is that you are using a List instead of a Map. As your error says, while deserializing it expects a Map but finds an ArrayList.
So, in order to solve this problem, you need to change all the lists in your model to maps, like this:
private Map<String, Object> mMapOne;
After changing all those fields, you also need to update your public setters and getters accordingly.
Hope it helps.
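A minimal sketch of what such a model might look like after the change; the class name comes from the question and the field name from the answer above, but the rest is a placeholder, not the asker's actual code. Firebase needs the public no-argument constructor and public getters/setters to deserialize the node.

```java
import java.util.Map;

public class TemplateModelClass {

    private Map<String, Object> mMapOne;

    public TemplateModelClass() {
        // Empty constructor required by Firebase for deserialization
    }

    public Map<String, Object> getMapOne() {
        return mMapOne;
    }

    public void setMapOne(Map<String, Object> mapOne) {
        this.mMapOne = mapOne;
    }
}
```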

Java & MySQL: Store an Read a 365 position of bitarray. HOW?

I am currently working with Java and MySQL, and I found an issue I don't know how to solve.
I have a class that stores a String of 365 positions representing a binary string ("010111010010100..."), and I would like to be able to store that field in the database and read it back.
Once it is read, I will perform an AND Logic operation with another bitarray.
I read about the BitSet class, which supports the logical operators (AND, OR, XOR, ...). I tried it, but I didn't like the solutions I got. I could also transform the String into a byte array, store it in the database and read it back, and then perform the logical AND operation, but I'm not sure whether I would always need to create a BitSet for that, and how performant it would be.
I don't know which is the most performant way to do what I want:
1. Convert the binary String into some other representation.
2. Store that element in the database (in the case of BitSet I tried defining the database field as BLOB, but I had a lot of issues converting the BitSet to a BLOB and reading the BLOB back into a BitSet).
3. Read the element from the database (at this point it would be great to work with the element directly, without any cast or transformation).
4. Perform a logical AND with another bit array and get the result.
I have tried a lot of options, but they didn't work.
Could someone help me with this problem and how to better approach it from the performance point of view?
Thanks!
Storing bits in a string is a bit weird. I used a long to store a number and did bitwise operations on that, but that won't work for you, since you use many more bits. If it can remain a string, maybe you can write a short function that applies the AND operator to each character of the string, somewhat like this:
// AND the two 365-character binary strings position by position
StringBuilder data = new StringBuilder(365);
for (int i = 0; i < 365; i++) {
    // the result bit is '1' only if both strings have '1' at this position
    data.append(stringname.charAt(i) == '1' && binarystring.charAt(i) == '1' ? '1' : '0');
}
Go through your string and, at each position, check it against the other binary string (the one you want to AND it with); if both characters are '1', concatenate a '1', otherwise concatenate a '0'.
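Since the question mentions trouble getting a BitSet into and out of a BLOB column, here is a hedged sketch of one way to do it with plain JDBC: build a BitSet from the 365-character string, store BitSet.toByteArray() in a binary column, and rebuild it with BitSet.valueOf() when reading. The table name calendar, the columns id/bits, and the VARBINARY(46) suggestion are assumptions, not taken from the question.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.BitSet;

public class BitArrayDao {

    // Build a BitSet from a 365-character "010111..." string
    static BitSet fromBinaryString(String s) {
        BitSet bits = new BitSet(s.length());
        for (int i = 0; i < s.length(); i++) {
            if (s.charAt(i) == '1') {
                bits.set(i);
            }
        }
        return bits;
    }

    // Store the raw bytes; 365 bits fit in 46 bytes (e.g. a VARBINARY(46) column)
    static void store(Connection conn, long id, BitSet bits) throws Exception {
        try (PreparedStatement ps =
                 conn.prepareStatement("UPDATE calendar SET bits = ? WHERE id = ?")) {
            ps.setBytes(1, bits.toByteArray());
            ps.setLong(2, id);
            ps.executeUpdate();
        }
    }

    // Read the bytes back and AND them with another bit array
    static BitSet loadAndAnd(Connection conn, long id, BitSet other) throws Exception {
        try (PreparedStatement ps =
                 conn.prepareStatement("SELECT bits FROM calendar WHERE id = ?")) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                BitSet bits = BitSet.valueOf(rs.getBytes(1));
                bits.and(other);   // in-place logical AND
                return bits;
            }
        }
    }
}
```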

How to parse large files using flatpack

I need to parse files that may be quite large, possibly hundreds of megabytes and millions of lines. I have been trying to do this using FlatPack. I would think the way to do this is to use the buffered parsers and the new stream methods. But even though dataset.next() returns true for the correct number of records, the Optional returned by dataset.getRecord() never contains a value.
I have looked at this example/test, but it only counts the number of records and does not actually do anything with the content.
You can use the class BuffReaderParseFactory instead of DefaultParserFactory.
It will read one record from the input file only when you call "next()".
The explanations for both DefaultParserFactory and BuffReaderParseFactory are not exactly helpful. Both factories are documented to return a PZParser (from newDelimitedParser), but only one of them returns an actual value from a record. Based on the examples I've seen, I think BuffReaderParseFactory is just for checking performance (hence it should be faster), whereas DefaultParserFactory holds all the records.
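A rough sketch of the buffered approach described above. The exact newDelimitedParser overload and its argument order should be checked against the FlatPack version in use, and the file names and the "NAME" column are placeholders; treat all of this as an assumption rather than verified API.

```java
import java.io.FileReader;
import java.io.Reader;

import net.sf.flatpack.DataSet;
import net.sf.flatpack.Parser;
import net.sf.flatpack.brparse.BuffReaderParseFactory;

public class LargeFileParse {
    public static void main(String[] args) throws Exception {
        try (Reader mapping = new FileReader("map.pzmap.xml");  // assumed column-mapping file
             Reader data = new FileReader("big-file.txt")) {    // assumed large input file
            // The buffered factory streams rows: each next() pulls one record from the file
            Parser parser = BuffReaderParseFactory.getInstance()
                    .newDelimitedParser(mapping, data, ',', '"', true);
            DataSet ds = parser.parse();
            while (ds.next()) {
                String value = ds.getString("NAME");  // "NAME" is a placeholder column
                // process the record here instead of collecting everything in memory
            }
        }
    }
}
```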
