Smallest possible value to add to array? - java

From what I've read, you can store bytes in an array in PHP with this sort of command:
$array = [1,2,14,10];
I have four basic flags I want to add to each array value, something like 0000. If the user performed an action that unlocked the 3rd flag, the value should look like 0010. If all flags are set, the value would look like 1111.
I plan on having a lot of these array values, so I was wondering: what is the smallest possible value I could put into an array that is also Java-friendly? After the data is stored in PHP, I'll need to get the array in Java and be able to retrieve these flags. That might look something like:
somevar array = array_from_php;
if(array[0][flag3] == 1)//Player has unlocked this flag
/*do something */
Any advice is greatly appreciated.

I think you don't want an array but a byte (8 bits),
a word (16 bits), or a dword (32 bits) to store
your flags in RAM or persistently in a DB or text file.
Since PHP is not a type-safe language, you cannot declare those exact types as far as I know.
But you inspired me: PHP's error_reporting value is stored like this,
though I think it is a full integer rather than just a byte, word or dword.
I did a little test and it seems to work:
<?php
// Flag definitions
$defs = array(
    "opt1" => 1,
    "opt2" => 2,
    "opt3" => 4,
    "opt4" => 8,
    "opt5" => 16,
    "opt6" => 32
);

// Enable flags 1, 3 and 4 by using the bitwise "OR" operator
$test = $defs["opt1"] | $defs["opt3"] | $defs["opt4"];
displayFlags($test, $defs);

// Enable flag 6, too
$test |= $defs["opt6"];
displayFlags($test, $defs);

// Disable flag 3
$test &= ~$defs["opt3"];
displayFlags($test, $defs);

// Little improvement: the enableFlag/disableFlag functions
enableFlag($test, $defs["opt5"]);
displayFlags($test, $defs);

disableFlag($test, $defs["opt5"]);
displayFlags($test, $defs);

function displayFlags($storage, $defs) {
    echo "The current storage value is: ".$storage;
    echo "<br />";
    foreach ($defs as $k => $v) {
        $isset = (($storage & $v) === $v);
        echo "Flag \"$k\" : ".($isset ? "Yes" : "No");
        echo "<br />";
    }
    echo "<br />";
}

function enableFlag(&$storage, $def) {
    $storage |= $def;
}

function disableFlag(&$storage, $def) {
    $storage &= ~$def;
}
The output is:
The current storage value is: 13
Flag "opt1" : Yes
Flag "opt2" : No
Flag "opt3" : Yes
Flag "opt4" : Yes
Flag "opt5" : No
Flag "opt6" : No
The current storage value is: 45
Flag "opt1" : Yes
Flag "opt2" : No
Flag "opt3" : Yes
Flag "opt4" : Yes
Flag "opt5" : No
Flag "opt6" : Yes
The current storage value is: 41
Flag "opt1" : Yes
Flag "opt2" : No
Flag "opt3" : No
Flag "opt4" : Yes
Flag "opt5" : No
Flag "opt6" : Yes
The current storage value is: 57
Flag "opt1" : Yes
Flag "opt2" : No
Flag "opt3" : No
Flag "opt4" : Yes
Flag "opt5" : Yes
Flag "opt6" : Yes
The current storage value is: 41
Flag "opt1" : Yes
Flag "opt2" : No
Flag "opt3" : No
Flag "opt4" : Yes
Flag "opt5" : No
Flag "opt6" : Yes
Conclusion:
I think this is the most efficient way to store flags with a minimum of space. But if you store it like this in a database, you may get problems with efficient queries on those flags. I don't think it is possible to query one or more specific bits of an integer value, but maybe I am wrong and you can use bitwise operators in a query, too. However, I love this way of saving data.

Java also has a byte[], which will be the smallest storage as well.
With that said, I believe you can find what you are looking for in this post: Store binary sequence in byte array?
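On the Java side, a minimal sketch of reading those flags back with bitwise operators might look like the following (the flag names and the hard-coded array standing in for the data received from PHP are assumptions based on the question):
public class FlagDemo {
    // Same bit positions as the PHP example; names are illustrative.
    static final int FLAG_1 = 1;      // 0001
    static final int FLAG_2 = 1 << 1; // 0010
    static final int FLAG_3 = 1 << 2; // 0100
    static final int FLAG_4 = 1 << 3; // 1000

    public static void main(String[] args) {
        // Pretend these bitmask values came from PHP (e.g. via JSON).
        int[] playerFlags = { 1, 2, 14, 10 };

        // Check whether flag 3 is set for the first entry.
        if ((playerFlags[0] & FLAG_3) != 0) {
            System.out.println("Player has unlocked flag 3");
        }
    }
}
Four flags only need 4 bits, so a single byte per entry is enough in either language; in Java an int is usually simpler to work with and the bitwise checks are identical.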

Related

Can I perform a regex search on Redis values?

I tried using RediSearch, but it only supports fuzzy search; I need to perform a regex search like:
key: "12345"
value: { name: "Maruti" }
Searching "aru" should return "Maruti"; basically the regex formed is *aru*. Can anyone help me out with how I can achieve this using Redis?
This can be done, but I do not recommend it - performance will be greatly impacted.
If you must, however, you can use RedisGears for ad-hoc regex queries like so:
127.0.0.1:6379> HSET mykey name Maruti
(integer) 1
127.0.0.1:6379> HSET anotherkey name Moana
(integer) 1
127.0.0.1:6379> RG.PYEXECUTE "import re\np = re.compile('.*aru.*')\nGearsBuilder().filter(lambda x: p.match(x['value']['name'])).map(lambda x: x['key']).run()"
1) 1) "mykey"
2) (empty array)
Here's the Python code for readability:
import re
p = re.compile('.*aru.*')
GearsBuilder() \
    .filter(lambda x: p.match(x['value']['name'])) \
    .map(lambda x: x['key']) \
    .run()

Better Performance hget vs get || Using Redis

1>
hset key field value --- here the field ("dept") will always be the same (constant) and the key could be 20 characters
hset "user1" "dept" 1
hset "user2" "dept" 2
hset "user3" "dept" 2
2>
set key value --- here the key could be 20 characters
set "{user1}dept" 1
set "{user2}dept" 2
set "{user3}dept" 3
Q1. In both cases, which get command will run faster (considering our database has millions of key-value pairs)?
hget "user2" "dept" vs get "{user2}dept"
Q2. Is hset "user1" "dept" 1 equivalent to {"user1" : {"dept" : 1}} or {"dept" : {"user1" : 1}}?
Q3. I want to implement expiry on both key and field, which is not possible with hset. Is there any alternative?

Validating user input UUID [duplicate]

How to check if variable contains valid UUID/GUID identifier?
I'm currently interested only in validating types 1 and 4, but it should not be a limitation to your answers.
UUIDs are specified in RFC 4122. An often-neglected edge case is the NIL UUID, noted here. The following regex takes this into account and will return a match for a NIL UUID. See below for a regex which only accepts non-NIL UUIDs. Both of these solutions are for versions 1 to 5 (see the first character of the third block).
Therefore to validate a UUID...
/^[0-9a-f]{8}-[0-9a-f]{4}-[0-5][0-9a-f]{3}-[089ab][0-9a-f]{3}-[0-9a-f]{12}$/i
...ensures you have a canonically formatted UUID that is Version 1 through 5 and is the appropriate Variant as per RFC4122.
NOTE: Braces { and } are not canonical. They are an artifact of some systems and usages.
It's easy to modify the above regex to meet the requirements of the original question.
HINT: regex groups/captures
To avoid matching NIL UUID:
/^[0-9a-f]{8}-[0-9a-f]{4}-[1-5][0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$/i
If you want to check or validate a specific UUID version, here are the corresponding regexes.
Note that the only difference is the version number, which is explained in the 4.1.3 Version section of RFC 4122.
The version number is the first character of the third group: [VERSION_NUMBER][0-9A-F]{3}:
UUID v1 :
/^[0-9A-F]{8}-[0-9A-F]{4}-[1][0-9A-F]{3}-[89AB][0-9A-F]{3}-[0-9A-F]{12}$/i
UUID v2 :
/^[0-9A-F]{8}-[0-9A-F]{4}-[2][0-9A-F]{3}-[89AB][0-9A-F]{3}-[0-9A-F]{12}$/i
UUID v3 :
/^[0-9A-F]{8}-[0-9A-F]{4}-[3][0-9A-F]{3}-[89AB][0-9A-F]{3}-[0-9A-F]{12}$/i
UUID v4 :
/^[0-9A-F]{8}-[0-9A-F]{4}-[4][0-9A-F]{3}-[89AB][0-9A-F]{3}-[0-9A-F]{12}$/i
UUID v5 :
/^[0-9A-F]{8}-[0-9A-F]{4}-[5][0-9A-F]{3}-[89AB][0-9A-F]{3}-[0-9A-F]{12}$/i
regex to the rescue
/^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$/.test('01234567-9ABC-DEF0-1234-56789ABCDEF0');
or with brackets
/^\{?[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}\}?$/
If you are using Node.js for development, it is recommended to use a package called Validator. It includes all the regexes required to validate different versions of UUIDs, plus you get various other validation functions.
Here is the npm link: Validator
var a = 'd3aa88e2-c754-41e0-8ba6-4198a34aa0a2'
v.isUUID(a)
true
v.isUUID('abc')
false
v.isNull(a)
false
If you use the uuid package, it provides a boolean validation function that tells you whether a UUID is valid or not.
Example:
import { validate as isValidUUID } from 'uuid';
if (!isValidUUID(tx.originId)) {
return Promise.reject('Invalid OriginID');
}
Thanks to @usertatha, with some modification:
function isUUID(uuid) {
    let s = "" + uuid;
    s = s.match('^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$');
    if (s === null) {
        return false;
    }
    return true;
}
Besides Gambol's answer, which will do the job in nearly all cases, all answers given so far miss that the grouped formatting (8-4-4-4-12) is not mandatory for encoding GUIDs in text. It is used extremely often, but obviously a plain chain of 32 hexadecimal digits can be valid as well.[1] Enhanced regex:
/^[0-9a-f]{8}-?[0-9a-f]{4}-?[1-5][0-9a-f]{3}-?[89ab][0-9a-f]{3}-?[0-9a-f]{12}$/i
[1] The question is about checking variables, so we should include the user-unfriendly form as well.
Why are there dashes in a .NET GUID? - Stack Overflow plus Accepted answer
Test and validate a GUID (guid.us)
Guid.ToString Method (String) (MSDN)
All type-specific regexes posted so far fail on the "type 0" Nil UUID, defined in section 4.1.7 of the RFC as:
The nil UUID is special form of UUID that is specified to have all 128 bits set to zero: 00000000-0000-0000-0000-000000000000
To modify Wolf's answer:
/^[0-9a-f]{8}-?[0-9a-f]{4}-?[0-5][0-9a-f]{3}-?[089ab][0-9a-f]{3}-?[0-9a-f]{12}$/i
Or, to properly exclude a "type 0" without all zeros, we have the following (thanks to Luke):
/^(?:[0-9a-f]{8}-?[0-9a-f]{4}-?[1-5][0-9a-f]{3}-?[89ab][0-9a-f]{3}-?[0-9a-f]{12}|00000000-0000-0000-0000-000000000000)$/i
If you use the uuid package, you can import validate and pass the id into it:
const { v4: uuidv4, validate } = require('uuid');
const { id } = request.params;
const isValid = validate(id);
I think Gambol's answer is almost perfect, but it misinterprets the RFC 4122 § 4.1.1. Variant section a bit.
It covers Variant-1 UUIDs (10xx = 8..b), but does not cover Variant-0 (0xxx = 0..7) and Variant-2 (110x = c..d) variants which are reserved for backward compatibility, so they are technically valid UUIDs. Variant-4 (111x = e..f) is indeed reserved for future use, so they are not valid currently.
Also, version 0 is not valid; that "digit" is only allowed to be 0 in the NIL UUID (as mentioned in Evan's answer).
So I think the most accurate regex that complies with current RFC 4122 specification is (including hyphens):
/^([0-9a-f]{8}-[0-9a-f]{4}-[1-5][0-9a-f]{3}-[0-9a-d][0-9a-f]{3}-[0-9a-f]{12}|00000000-0000-0000-0000-000000000000)$/i
(The [1-5] version digit excludes 0, which is only valid as the all-zero NIL UUID; the [0-9a-d] variant digit excludes only e..f, which are currently invalid.)
A slightly modified version of the above answers written in a more concise way. This will validate any GUID with hyphens (however easily modified to make hyphens optional). This will also support upper and lower case characters which has become the convention regardless of the specification:
/^([0-9a-fA-F]{8})-(([0-9a-fA-F]{4}\-){3})([0-9a-fA-F]{12})$/i
The key here is the repeating part below:
(([0-9a-fA-F]{4}\-){3})
which simply repeats the 4-character group three times.
If someone is using yup, the JavaScript schema validation library, this functionality can be achieved with the code below.
const schema = yup.object().shape({
    uuid: yup.string().uuid()
});
const isValid = schema.isValidSync({ uuid: "string" });
Use the .matches() method to check whether a String is a UUID.
public boolean isUUID(String s) {
    return s.matches("^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$");
}
A good way to do it in Node is to use the ajv package (https://github.com/epoberezkin/ajv).
const Ajv = require('ajv');
const ajv = new Ajv({ allErrors: true, useDefaults: true, verbose: true });
const uuidSchema = { type: 'string', format: 'uuid' };
ajv.validate(uuidSchema, 'bogus'); // returns false
ajv.validate(uuidSchema, 'd42a8273-a4fe-4eb2-b4ee-c1fc57eb9865'); // returns true with v4 GUID
ajv.validate(uuidSchema, '892717ce-3bd8-11ea-b77f-2e728ce88125'); // returns true with a v1 GUID
Versions 1 to 5, without using a multi-version regex when version is omitted.
const uuid_patterns = {
    1: /^[0-9A-F]{8}-[0-9A-F]{4}-1[0-9A-F]{3}-[0-9A-F]{4}-[0-9A-F]{12}$/i,
    2: /^[0-9A-F]{8}-[0-9A-F]{4}-2[0-9A-F]{3}-[0-9A-F]{4}-[0-9A-F]{12}$/i,
    3: /^[0-9A-F]{8}-[0-9A-F]{4}-3[0-9A-F]{3}-[0-9A-F]{4}-[0-9A-F]{12}$/i,
    4: /^[0-9A-F]{8}-[0-9A-F]{4}-4[0-9A-F]{3}-[89AB][0-9A-F]{3}-[0-9A-F]{12}$/i,
    5: /^[0-9A-F]{8}-[0-9A-F]{4}-5[0-9A-F]{3}-[89AB][0-9A-F]{3}-[0-9A-F]{12}$/i
};
const isUUID = (input, version) => {
    if (typeof input === "string") {
        if (Object.keys(uuid_patterns).includes(typeof version === "string" ? version : String(version))) {
            return uuid_patterns[version].test(input);
        } else {
            return Object.values(uuid_patterns).some(pattern => pattern.test(input));
        }
    }
    return false;
}
// Testing
let valid = [
'A987FBC9-4BED-3078-CF07-9141BA07C9F3',
'A987FBC9-4BED-4078-8F07-9141BA07C9F3',
'A987FBC9-4BED-5078-AF07-9141BA07C9F3',
];
let invalid = [
'',
'xxxA987FBC9-4BED-3078-CF07-9141BA07C9F3',
'A987FBC9-4BED-3078-CF07-9141BA07C9F3xxx',
'A987FBC94BED3078CF079141BA07C9F3',
'934859',
'987FBC9-4BED-3078-CF07A-9141BA07C9F3',
'AAAAAAAA-1111-1111-AAAG-111111111111',
];
valid.forEach(test => console.log("Valid case, result: "+isUUID(test)));
invalid.forEach(test => console.log("Invalid case, result: "+isUUID(test)));
I added a UUID validator to Apache Commons Validator. It's not yet been merged, but you can vote for it here:
https://github.com/apache/commons-validator/pull/68
I have this function, but it's essentially the same as the accepted answer.
export default function isUuid(uuid: string, isNullable: boolean = false): boolean {
return isNullable
? /^[0-9a-f]{8}-[0-9a-f]{4}-[0-5][0-9a-f]{3}-[089ab][0-9a-f]{3}-[0-9a-f]{12}$/i.test(uuid)
: /^[0-9a-f]{8}-[0-9a-f]{4}-[1-5][0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$/i.test(uuid);
}
I think a better way is using the static method fromString to avoid those regular expressions.
id = UUID.randomUUID();
UUID uuid = UUID.fromString(id.toString());
Assert.assertEquals(id.toString(), uuid.toString());
On the other hand
UUID uuidFalse = UUID.fromString("x");
throws java.lang.IllegalArgumentException: Invalid UUID string: x
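If you prefer that approach, a small helper (illustrative only, not part of the answer above) can turn the exception into a boolean:
import java.util.UUID;

public final class UuidCheck {
    // Returns true if the string parses as a UUID. Note that UUID.fromString is
    // slightly more lenient than the canonical 8-4-4-4-12 format in some corner cases.
    public static boolean isUuid(String candidate) {
        try {
            UUID.fromString(candidate);
            return true;
        } catch (IllegalArgumentException e) {
            return false;
        }
    }
}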

MongoDB TextCriteria split on specific characters [duplicate]

Example:
> db.stuff.save({"foo":"bar"});
> db.stuff.find({"foo":"bar"}).count();
1
> db.stuff.find({"foo":"BAR"}).count();
0
You could use a regex.
In your example that would be:
db.stuff.find( { foo: /^bar$/i } );
I must say, though, maybe you could just downcase (or upcase) the value on the way in rather than incurring the extra cost every time you search for it. Obviously this won't work for people's names and such, but it may for use cases like tags.
UPDATE:
The original answer is now obsolete. Mongodb now supports advanced full text searching, with many features.
ORIGINAL ANSWER:
It should be noted that searching with a regex's case-insensitive /i flag means that MongoDB cannot search by index, so queries against large datasets can take a long time.
Even with small datasets, it's not very efficient. You take a far bigger CPU hit than your query warrants, which could become an issue if you are trying to achieve scale.
As an alternative, you can store an uppercase copy and search against that. For instance, I have a User table that has a username which is mixed case, but the id is an uppercase copy of the username. This ensures that duplicates differing only in case are impossible (having both "Foo" and "foo" will not be allowed), and I can search by id = username.toUpperCase() to get a case-insensitive search for username.
If your field is large, such as a message body, duplicating data is probably not a good option. I believe using an extraneous indexer like Apache Lucene is the best option in that case.
Starting with MongoDB 3.4, the recommended way to perform fast case-insensitive searches is to use a Case Insensitive Index.
I personally emailed one of the founders to please get this working, and he made it happen! It was an issue on JIRA since 2009, and many have requested the feature. Here's how it works:
A case-insensitive index is made by specifying a collation with a strength of either 1 or 2. You can create a case-insensitive index like this:
db.cities.createIndex(
{ city: 1 },
{
collation: {
locale: 'en',
strength: 2
}
}
);
You can also specify a default collation per collection when you create them:
db.createCollection('cities', { collation: { locale: 'en', strength: 2 } } );
In either case, in order to use the case-insensitive index, you need to specify the same collation in the find operation that was used when creating the index or the collection:
db.cities.find(
{ city: 'new york' }
).collation(
{ locale: 'en', strength: 2 }
);
This will return "New York", "new york", "New york" etc.
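If you are querying from Java rather than the shell, a rough equivalent using the MongoDB Java sync driver might look like the sketch below (the collection and field names are taken from the shell example; treat it as an illustration, not the canonical recipe):
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Collation;
import com.mongodb.client.model.CollationStrength;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.IndexOptions;
import com.mongodb.client.model.Indexes;
import org.bson.Document;

public class CaseInsensitiveFind {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> cities = client.getDatabase("test").getCollection("cities");

            // Case-insensitive collation: strength 2 ignores case but keeps diacritics significant.
            Collation caseInsensitive = Collation.builder()
                    .locale("en")
                    .collationStrength(CollationStrength.SECONDARY)
                    .build();

            // Create the index with that collation...
            cities.createIndex(Indexes.ascending("city"), new IndexOptions().collation(caseInsensitive));

            // ...and specify the same collation on the query so the index can be used.
            for (Document doc : cities.find(Filters.eq("city", "new york")).collation(caseInsensitive)) {
                System.out.println(doc.toJson());
            }
        }
    }
}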
Other notes
The answers suggesting to use full-text search are wrong in this case (and potentially dangerous). The question was about making a case-insensitive query, e.g. username: 'bill' matching BILL or Bill, not a full-text search query, which would also match stemmed words of bill, such as Bills, billed etc.
The answers suggesting to use regular expressions are slow, because even with indexes, the documentation states:
"Case insensitive regular expression queries generally cannot use indexes effectively. The $regex implementation is not collation-aware and is unable to utilize case-insensitive indexes."
$regex answers also run the risk of user input injection.
If you need to create the regexp from a variable, this is a much better way to do it: https://stackoverflow.com/a/10728069/309514
You can then do something like:
var string = "SomeStringToFind";
var regex = new RegExp(["^", string, "$"].join(""), "i");
// Creates a regex of: /^SomeStringToFind$/i
db.stuff.find( { foo: regex } );
This has the benefit of being more programmatic, or you can get a performance boost by compiling it ahead of time if you're reusing it a lot.
Keep in mind that the previous example:
db.stuff.find( { foo: /bar/i } );
will cause every entry containing bar to match the query (bar1, barxyz, openbar), which could be very dangerous for a username search in an auth function...
You may need to make it match only the search term by using the appropriate regex anchors:
db.stuff.find( { foo: /^bar$/i } );
See http://www.regular-expressions.info/ for syntax help on regular expressions
db.company_profile.find({ "companyName" : { "$regex" : "Nilesh" , "$options" : "i"}});
db.zipcodes.find({city : "NEW YORK"}); // Case-sensitive
db.zipcodes.find({city : /NEW york/i}); // Note the 'i' flag for case-insensitivity
TL;DR
The correct way to do this in Mongo:
Do not use RegExp.
Go natural and use MongoDB's built-in indexing and search.
Step 1 :
db.articles.insert(
[
{ _id: 1, subject: "coffee", author: "xyz", views: 50 },
{ _id: 2, subject: "Coffee Shopping", author: "efg", views: 5 },
{ _id: 3, subject: "Baking a cake", author: "abc", views: 90 },
{ _id: 4, subject: "baking", author: "xyz", views: 100 },
{ _id: 5, subject: "Café Con Leche", author: "abc", views: 200 },
{ _id: 6, subject: "Сырники", author: "jkl", views: 80 },
{ _id: 7, subject: "coffee and cream", author: "efg", views: 10 },
{ _id: 8, subject: "Cafe con Leche", author: "xyz", views: 10 }
]
)
Step 2 :
You need to create an index on whichever text field you want to search; without an index the query will be extremely slow.
db.articles.createIndex( { subject: "text" } )
Step 3 :
db.articles.find( { $text: { $search: "coffee", $caseSensitive: true } } )  // case-sensitive search
db.articles.find( { $text: { $search: "coffee", $caseSensitive: false } } ) // case-insensitive search
One very important thing to keep in mind when using a regex-based query: when you are doing this for a login system, escape every single character you are searching for, and don't forget the ^ and $ anchors. Lodash has a nice function for this, should you be using it already:
db.stuff.find({ foo: new RegExp('^' + _.escapeRegExp(bar) + '$', 'i') })
Why? Imagine a user entering .* as his username. That would match all usernames, enabling a login by just guessing any user's password.
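The same concern applies outside JavaScript. In Java, for instance, Pattern.quote plays the role of _.escapeRegExp; a small sketch with made-up input:
import java.util.regex.Pattern;

public class SafeRegexExample {
    public static void main(String[] args) {
        String userInput = ".*";                     // hostile input that would otherwise match everything
        String quoted = Pattern.quote(userInput);    // treat it as a literal, not a pattern
        String anchored = "^" + quoted + "$";        // match only the exact term

        System.out.println("bob".matches(anchored)); // false
        System.out.println(".*".matches(anchored));  // true: only the literal string matches
    }
}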
Suppose you want to search for "column" in "Table" and you want a case-insensitive search. The best and most efficient way is:
//create an empty JSON object
mycolumn = {};
//check if column has a valid value
if (column) {
    mycolumn.column = { $regex: new RegExp(column), $options: "i" };
}
Table.find(mycolumn);
It just adds your search value as a regex and searches with the case-insensitive criteria set via the "i" option.
Mongo (current version 2.0.0) doesn't allow case-insensitive searches against indexed fields - see their documentation. For non-indexed fields, the regexes listed in the other answers should be fine.
For searching a variable and escaping it:
const escapeStringRegexp = require('escape-string-regexp')
const name = 'foo'
db.stuff.find({name: new RegExp('^' + escapeStringRegexp(name) + '$', 'i')})
Escaping the variable protects the query against attacks with '.*' or other regex.
escape-string-regexp
The best method, in your language of choice, is: when creating a model wrapper for your objects, have your save() method iterate through a set of fields that you will be searching on and that are also indexed; those fields should have lowercase counterparts that are then used for searching.
Every time the object is saved again, the lowercase properties are checked and updated with any changes to the main properties. This makes searches efficient, but hides the extra work needed to update the lc fields each time.
The lowercase fields could be a key:value object store or just the field name with an lc_ prefix. I use the second one to simplify querying (deep object querying can be confusing at times).
Note: you want to index the lc_ fields, not the main fields they are based off of.
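As a rough sketch of that save-time copying in Java (the class, the searchable field names, and the use of the bson Document type are illustrative assumptions):
import java.util.Locale;
import org.bson.Document;

class LowercaseShadowFields {
    // Fields we search on; each gets an lc_ copy, and the lc_ copy is what gets indexed.
    private static final String[] SEARCHABLE = { "username", "email" };

    static Document withLowercaseCopies(Document doc) {
        for (String field : SEARCHABLE) {
            Object value = doc.get(field);
            if (value instanceof String) {
                doc.put("lc_" + field, ((String) value).toLowerCase(Locale.ROOT));
            }
        }
        return doc;
    }
}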
Using Mongoose this worked for me:
var find = function(username, next) {
    User.find({'username': {$regex: new RegExp('^' + username, 'i')}}, function(err, res) {
        if (err) throw err;
        next(null, res);
    });
};
If you're using MongoDB Compass:
Go to the collection, in the filter type -> {Fieldname: /string/i}
For Node.js using Mongoose:
Model.find({FieldName: {$regex: "stringToSearch", $options: "i"}})
The aggregation framework was introduced in MongoDB 2.2. You can use the string operator $strcasecmp to make a case-insensitive comparison between strings. It's more recommended and easier than using a regex.
Here's the official document on the aggregation command operator: https://docs.mongodb.com/manual/reference/operator/aggregation/strcasecmp/#exp._S_strcasecmp .
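As a hedged sketch only (the collection and field names are invented for illustration), the same idea expressed through the MongoDB Java driver could look like:
import static java.util.Arrays.asList;

import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Aggregates;
import com.mongodb.client.model.Filters;
import org.bson.Document;

public class StrCaseCmpExample {
    public static void main(String[] args) {
        MongoCollection<Document> users = MongoClients.create("mongodb://localhost:27017")
                .getDatabase("test")
                .getCollection("users");

        // $strcasecmp returns 0 when the two strings are equal ignoring case.
        Document caseInsensitiveEq = new Document("$eq",
                asList(new Document("$strcasecmp", asList("$username", "bill")), 0));

        for (Document doc : users.aggregate(asList(Aggregates.match(Filters.expr(caseInsensitiveEq))))) {
            System.out.println(doc.toJson());
        }
    }
}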
You can use Case Insensitive Indexes:
The following example creates a collection with no default collation, then adds an index on the name field with a case-insensitive collation (using International Components for Unicode collation rules).
/* strength: CollationStrength.Secondary
 * Secondary level of comparison. Collation performs comparisons up to secondary
 * differences, such as diacritics. That is, collation performs comparisons of
 * base characters (primary differences) and diacritics (secondary differences).
 * Differences between base characters take precedence over secondary differences.
 */
db.users.createIndex( { name: 1 }, { collation: { locale: 'tr', strength: 2 } } )
To use the index, queries must specify the same collation.
db.users.insert( [ { name: "Oğuz" },
{ name: "oğuz" },
{ name: "OĞUZ" } ] )
// does not use index, finds one result
db.users.find( { name: "oğuz" } )
// uses the index, finds three results
db.users.find( { name: "oğuz" } ).collation( { locale: 'tr', strength: 2 } )
// does not use the index, finds three results (different strength)
db.users.find( { name: "oğuz" } ).collation( { locale: 'tr', strength: 1 } )
or you can create a collection with default collation:
db.createCollection("users", { collation: { locale: 'tr', strength: 2 } } )
db.users.createIndex( { name : 1 } ) // inherits the default collation
I'm surprised nobody has warned about the risk of regex injection when using /^bar$/i if bar is a password or an account id search (i.e. bar => .*#myhackeddomain.com). So here comes my bet: use the \Q \E regex special characters provided in Perl-compatible regexes:
db.stuff.find( { foo: /^\Qbar\E$/i } );
You should escape the bar variable's \ characters with \\ to avoid an \E exploit, e.g. when bar = '\E.*#myhackeddomain.com\Q'
Another option is to use a regex escape char strategy like the one described here Javascript equivalent of Perl's \Q ... \E or quotemeta()
Use RegExp.
In case the other options do not work for you, RegExp is a good option. It makes the string case-insensitive.
var username = new RegExp("^" + "John" + "$", "i");
Use username in your queries, and then it's done.
I hope it will work for you too. All the best.
If there are special characters in the query, a simple regex will not work. You will need to escape those special characters.
The following helper function can help without installing any third-party library:
const escapeSpecialChars = (str) => {
    return str.replace(/[-[\]{}()*+?.,\\^$|#\s]/g, "\\$&");
}
And your query will be like this:
db.collection.find({ field: { $regex: escapeSpecialChars(query), $options: "i" }})
Hope it will help!
Using a filter works for me in C#.
string s = "searchTerm";
var filter = Builders<Model>.Filter.Where(p => p.Title.ToLower().Contains(s.ToLower()));
var listSorted = collection.Find(filter).ToList();
var list = collection.Find(filter).ToList();
It may even use the index, because I believe the methods are called after the return happens, but I haven't tested this out yet.
This also avoids the problem of
var filter = Builders<Model>.Filter.Eq(p => p.Title.ToLower(), s.ToLower());
where MongoDB will think p.Title.ToLower() is a property and won't map properly.
I had faced a similar issue and this is what worked for me:
const flavorExists = await Flavors.findOne({
'flavor.name': { $regex: flavorName, $options: 'i' },
});
Yes, it is possible.
You can use $expr like this:
$expr: {
    $eq: [
        { $toLower: '$STRUNG_KEY' },
        { $toLower: 'VALUE' }
    ]
}
Please do not use a regex, because it can cause a lot of problems, especially if you use a string coming from the end user.
I've created a simple Func for the case insensitive regex, which I use in my filter.
private Func<string, BsonRegularExpression> CaseInsensitiveCompare = (field) =>
BsonRegularExpression.Create(new Regex(field, RegexOptions.IgnoreCase));
Then you simply filter on a field as follows.
db.stuff.find({"foo": CaseInsensitiveCompare("bar")}).count();
These have been tested for string searches
{'_id': /.*CM.*/}         // find _id where _id contains "CM"
{'_id': /^CM/}            // find _id where _id starts with "CM"
{'_id': /CM$/}            // find _id where _id ends with "CM"
{'_id': /.*UcM075237.*/i} // find _id where _id contains "UcM075237", ignoring upper/lower case
{'_id': /^UcM075237/i}    // find _id where _id starts with "UcM075237", ignoring upper/lower case
{'_id': /UcM075237$/i}    // find _id where _id ends with "UcM075237", ignoring upper/lower case
For anyone using Golang who wishes to have case-insensitive full-text search with MongoDB and the mgo (globalsign) library:
collation := &mgo.Collation{
    Locale:   "en",
    Strength: 2,
}
err := collection.Find(query).Collation(collation)
As you can see in the Mongo docs, since version 3.2 the $text index is case-insensitive by default: https://docs.mongodb.com/manual/core/index-text/#text-index-case-insensitivity
Create a text index and use the $text operator in your query.

How to populate a Map from a file using 2 regex simultaneously?

I'm currently populating a List from a .audit file and extracting out two pieces of information into a Map. Here is what the file looks like:
type : REGISTRY_SETTING
description : "1.9.56 Network security: Do not store LAN Manager hash value on next password change: Enabled"
info : "This control defines whether the LAN Manager (LM) hash value for the new password is stored when the password is changed."
solution : "Make sure 'Do not store LAN Manager hash value on next password change' is Enabled."
reference : "PCI-DSS|8.4,800-53|AC-3,800-53|SC-5,800-53|CM-7,800-53|CM-6,CCE|CCE-8937-5"
see_also : "https://benchmarks.cisecurity.org/tools2/windows/CIS_Microsoft_Windows_7_Benchmark_v1.2.0.pdf"
value_type : POLICY_DWORD
reg_key : "HKLM\System\CurrentControlSet\Control\Lsa"
reg_item : "NoLMHash"
value_data : 1
type : REGISTRY_SETTING
description : "1.13.3 Notify antivirus programs when opening attachments: Enabled"
info : "This control defines whether antivirus program to be notified when opening attachments."
solution : "Make sure 'Notify antivirus programs when opening attachments' is Enabled."
reference : "800-53|SI-3,PCI-DSS|5.1.1,CCE|CCE-10076-8,PCI-DSS|5.1"
see_also : "https://benchmarks.cisecurity.org/tools2/windows/CIS_Microsoft_Windows_7_Benchmark_v1.2.0.pdf"
value_type : POLICY_DWORD
reg_key : "HKU\Software\Microsoft\Windows\CurrentVersion\Policies\Attachments"
reg_item : "ScanWithAntiVirus"
value_data : 3
reg_ignore_hku_users : "S-1-5-18,S-1-5-19,S-1-5-20"
I need the Map to be in a <description, value_data> format, regardless of whether anything comes after value_data. E.g.:
Key: "1.9.56 Network security: Do not store LAN Manager hash value on next password change: Enabled"
Value: 1
Here is my current code for populating the Map with its keys:
String descriptionString = Pattern.quote("description") + "(.*?)" + Pattern.quote("info");
Pattern descriptionPattern = Pattern.compile(descriptionString);
Matcher descriptionMatcher = descriptionPattern.matcher(auditContentList.get(i));
while (descriptionMatcher.find())
{
    System.out.println("Key found");
    customItemMap.put(descriptionMatcher.group(1), "");
}
The problem is that I can't use two regexes simultaneously to populate the same entry of the Map at any given time. Is there a better way to do this?
For your sample data, maybe you should try this:
description.*?"(.*?)"[\s\S]*?value_data.*?(\d+)
I've tested it [here].
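As a minimal sketch of how that combined pattern could populate the Map in Java (the class and variable names are illustrative; the regex is the one suggested above):
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class AuditParser {
    public static Map<String, String> parse(String auditContent) {
        Map<String, String> itemMap = new LinkedHashMap<>();
        // Group 1 captures the quoted description, group 2 captures the value_data number.
        Pattern p = Pattern.compile("description.*?\"(.*?)\"[\\s\\S]*?value_data.*?(\\d+)");
        Matcher m = p.matcher(auditContent);
        while (m.find()) {
            itemMap.put(m.group(1), m.group(2));
        }
        return itemMap;
    }
}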
