Java 16: Replace several letters

I have a somewhat unusual problem. I am currently trying to program a chat filter for Discord in Java 16.
Here I ran into the problem that in German there are several ways to write a word to get around this filter.
As an example I now take the insult "Hurensohn".
Now you could simply write "Huränsohn" or "Hur3nsohn" in the chat and thus bypass the filter quite easily.
Since I don't want to manually add every possibility to the filter, I thought about how I could do it automatically. So the first thing I did was to create a HashMap with all possible alternative letters, which looked something like this:
Map<String, List<String>> alternativeCharacters = new HashMap<>();
alternativeCharacters.put( "E", List.of( "ä", "3" ) );
I tried to change the corresponding letters in the words and add them to the chat filter, which actually worked.
But now we come to the problem:
To be able to cover all possible combinations, it doesn't do me much good to change only one type of letter in a word.
If we now take the word "Einschalter" and change the letter "e", we could replace it either with a "3" or with an "ä", which would produce the following:
3inschalt3r
Einschalt3r
3inschalter
and
Äinschaltär
Einschaltär
Äinschalter
But now I also want "mixed" words to be created, e.g. "3inschaltär", where both the "ä" and the "3" are used in one word. That would add the following combinations:
3inschaltär
Äinschalt3r
Does anyone know how I can realize something like that? With the normal replace() method I haven't found a way yet to create "mixed" replacements.
I hope people understand what kind of problem I have and what I want to do. :D
Current method used for replacing:
public static List<String> replace( String word, String from, String... to ) {
    final int[] index = { 0 };
    List<String> strings = new ArrayList<>();
    /* Replaces all occurrences of the letter at once */
    List.of( to ).forEach( value -> strings.add( word.replaceAll( from, value ) ) );
    /* Here is the problem: only one letter is replaced at a time, so no "mixed" words are produced */
    List.of( to ).forEach( value -> {
        List.of( word.split( "" ) ).forEach( letters -> {
            if ( letters.equalsIgnoreCase( from ) ) {
                strings.add( word.substring( 0, index[0] ) + value + word.substring( index[0] + 1 ) );
            }
            index[0]++;
        } );
        index[0] = 0;
    } );
    return strings;
}

As said by others, you can’t keep up with the creativity of people. But if you want to continue using such a check, you should use the right tool for the job, i.e. a RuleBasedCollator.
RuleBasedCollator c = new RuleBasedCollator("<i,I=1=!<e=ä,E=3=Ä<o=0,O");
c.setStrength(Collator.PRIMARY);
String a = "3inschaltär", b = "Einschalter";
if(c.compare(a, b) == 0) {
    System.out.println(a + " matches " + b);
}
3inschaltär matches Einschalter
This class even allows efficient hash lookups:
// using c from above
// prepare map
var map = new HashMap<CollationKey, String>();
for(String s: List.of("Einschalter", "Hicks-Boson")) {
    map.put(c.getCollationKey(s), s);
}
// use map for lookup
for(String s: List.of("Ä!nschalt3r", "H1cks-B0sOn")) {
    System.out.println(s);
    String match = map.get(c.getCollationKey(s));
    if(match != null) System.out.println("\ta variant of " + match);
}
Ä!nschalt3r
a variant of Einschalter
H1cks-B0sOn
a variant of Hicks-Boson
While a Collator can be used for sorting, you're only interested in identifying equal strings. Therefore, I didn't care to specify a useful order, which simplifies the rules, as we only need to specify the characters that are supposed to be equal.
The linked documentation explains the syntax; in short, I=1=! defines the characters I, 1, and ! as equal, whereas prepending i, defines i to be a different case of the other characters. Likewise, e=ä,E=3=Ä defines e as equal to ä, and both as a different case than the characters E, 3, Ä. Finally, the < separator defines characters to be different. It also defines a sorting order which, as said, we don't care about in this usage.
As an addendum, the following can be used to remove accents and other markings from characters, except for umlauts, since you want to match German words. This would remove the need to deal with an exploding number of obfuscated character combinations, especially from people who know about Zalgo text converters:
String s = "òñę ảëîöū";
String n = Normalizer.normalize(s, Normalizer.Form.NFD)
.replaceAll("(?!(?<=[aou])\u0308)\\p{Mn}", "");
System.out.println(s + " -> " + n);
òñę ảëîöū -> one aeiöu

Off the top of my head, you may try to approach this using regular expressions, compiling patterns by replacing the respective letters where multiple ways of writing may occur in your dictionary.
E.g. something in this direction:
record LetterReplacements(String letter, List<String> replacements) {}

public Predicate<String> generatePredicateForDictionaryWord(String word) {
    var letterA = new LetterReplacements("a", List.of("a", "A", "4"));
    var writingStyles = letterA.replacements().stream()
            .collect(Collectors.joining("|", "(", ")"));
    var pattern = word.replaceAll(letterA.letter(), writingStyles);
    return Pattern.compile(pattern).asPredicate();
}
Example usage:
@ParameterizedTest
@CsvSource({
        "maus,true",
        "m4us,true",
        "mAus,true",
        "mous,false"
})
void testDictionaryPredicates(String word, boolean expectedResult) {
    var predicate = underTest.generatePredicateForDictionaryWord("maus");
    assertThat(predicate.test(word)).isEqualTo(expectedResult);
}
However, I doubt that any approach in this direction would yield sufficient results in terms of performance, especially since I expect your dictionary to grow rather fast and the number of different writing "styles" to be rather large.
So please regard the snippet above only as an explanation of the approach I was talking about. Again, I doubt you would achieve sufficient performance, even if you precompiled all patterns and predicate combinations beforehand.

Related

Synchronizing a Cipher

I am creating an application which simply scrambles a string, and can put it back together. It follows a simple cipher. I am using code like this:
String oldstr = "Hello";
String newstr = oldstr.replace("e", "l").replace("l", "t");
I only put a tiny bit because if I wrote out the entire thing, it would be huge.
On to the problem. The way the program works, it first replaces the "e" in "Hello" with an "l", turning the string into "Hlllo". Then it replaces the "l"s with "t"s. However, I don't want the "e" to eventually become a "t", since then I can't turn it back into the original. The way I want this application to work, the outcome would be "Hltto". Is there a way that I can synchronize the application so that it will do this?
EDIT:
I am not looking for answers that only work for this scenario (in my actual application I have 26 characters being changed).
A cipher should be reversible.
That means all operations must be reversible. A replace operation is not, because it maps two different characters to one (yours maps both 'e' and 'l' to 'l').
You'd have to build a table of what each character becomes, and change each character based on that table. For example:
a → ...
b → ...
c → ...
d → ...
e → l
...
l → t
...
Be sure that there are no duplicates on the right side.
Then iterate over each character in the string and build a new string.
StringBuilder sb = new StringBuilder(oldStr.length());
for (char c : oldStr.toCharArray())
    sb.append(replaceChar(c));
String newStr = sb.toString();
The function replaceChar(c) can be as simple as a look-up table, or for more coding convenience, can be a 26-entry switch statement, or (as in real cryptography) a key-dependent mathematical function.
Also, be aware of how upper and lower case letters should be handled, and of what happens to other characters (like 'à', 'ÿ', '気', ...).
Consider processing a string char-by-char using a Map for replacements:
// replacement map
static final Map<Character, Character> REPLACEMENTS = new HashMap<>();
// fill up replacements
static {
    for (String r : "el,lt,te,ab,ba".split(",")) // add all replacement pairs you need
        REPLACEMENTS.put(r.charAt(0), r.charAt(1)); // e→l, l→t, t→e, a→b, ...
}
// encoding method
public static String encode(String input) {
    StringBuilder sb = new StringBuilder();
    for (char c : input.toCharArray())
        sb.append(REPLACEMENTS.getOrDefault(c, c));
    return sb.toString();
}
To check how it's working, call:
System.out.println(encode("Hello")); // Hltto
One way it can be accomplished is to have a bitmap/bitstring/array of binary values (call it whatever you want), so that each bit represents a letter of the string. Initially, before ciphering, all bits are 0. Then, when you change some letter, you go to the bit array and flip the corresponding bit. This way you know it was already changed, so the next pass will only change letters whose bit is still 0.
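A rough sketch of that idea (untested; a boolean per character position plays the role of the bit array, and the method names are just illustrative):
static String encode(String input) {
    char[] chars = input.toCharArray();
    boolean[] changed = new boolean[chars.length];   // all "bits" are 0 before ciphering
    replacePass(chars, changed, 'e', 'l');
    replacePass(chars, changed, 'l', 't');
    return new String(chars);
}

static void replacePass(char[] chars, boolean[] changed, char from, char to) {
    for (int i = 0; i < chars.length; i++) {
        if (!changed[i] && chars[i] == from) {       // skip positions flipped by an earlier pass
            chars[i] = to;
            changed[i] = true;                       // flip the "bit" for this position
        }
    }
}
With the two passes from the question (e → l, then l → t), "Hello" comes out as "Hltto", because the position changed by the first pass is skipped by the second.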
Or you can do all your ciphering in just one pass.
for (int i = 0; i < oldstring.length(); i++) {
    switch (oldstring.charAt(i)) {
        // ... one case per letter to be replaced ...
        default:
            // do nothing
    }
}

Change part of code to functional paradigm

I have few lines of code, which I'm trying to convert to functional paradigm. The code is:
private String test(List<String> strings, String input) {
    for (String s : strings) {
        input = input.replace(s, ", ");
    }
    return input;
}
I need to make this one instruction chain. It must replace all strings from the given list with a comma in the given String input. I tried to do it with the map method, but with no success. I'm aware I could do it if I appended the input string to the list at the beginning and then called map, but the list is immutable, so I cannot do that.
I believe you can do this with a simple reduce:
strings.stream().reduce(input, (in, s) -> in.replace(s, ", "));
It takes the input, and replaces each occurrence of the first string with ", ". Then it takes that result, uses it as the input along with the next string, and repeats for every item in the list.
As Louis Wasserman points out, this approach cannot be used with parallelStream, so it won't work if you want parallelization.
The only thing I can think of -- which is pretty awkward -- is
strings.stream()
       .map(s -> (Function<String, String>) (x -> x.replace(s, ", ")))
       .reduce(Function.identity(), Function::andThen)
       .apply(input)
The following does pretty much the same thing.
private String test(List<String> strings, String input) {
    return input.replaceAll(
            strings.stream()
                   .map(Pattern::quote)
                   .collect(Collectors.joining("|")),
            ", ");
}
The main difference is that it first combines all the search strings into a single regex, and applies them all at once. Depending on the size of your input strings, this may perform even better than your original approach.
If the list of strings is fixed, or changes only rarely, you can get some more speed from precompiling the joined pattern.
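For instance, a precompiled variant might look roughly like this (a sketch; the class and method names are just placeholders):
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

class ListReplacer {
    private final Pattern searchStrings;   // compiled once, reused for every input

    ListReplacer(List<String> strings) {
        this.searchStrings = Pattern.compile(
                strings.stream()
                       .map(Pattern::quote)
                       .collect(Collectors.joining("|")));
    }

    String test(String input) {
        return searchStrings.matcher(input).replaceAll(", ");
    }
}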

Select words with at least two different letters

I am using this code
Matcher m2 = Pattern.compile("\\b[ABE]+\\b").matcher(key);
to only get keys from a HashMap that contain the letters A, B or E.
However, I am not interested in words such as AAAAAA or EEEEE; I need words with at least two different letters (three, in the best case).
Is there a way to modify the regex ? Can anyone offer insight on this?
Replace everything except your letters, make a Set of the result, test the Set for size.
public static void main(String[] args) {
    String alphabet = "ABC";
    String totest = "BBA";
    if (args.length == 2) {
        alphabet = args[0];
        totest = args[1];
    }
    String cleared = totest.replaceAll("[^" + alphabet + "]", "");
    char[] ca = cleared.toCharArray();
    Set<Character> unique = new HashSet<Character>();
    for (char c : ca)
        unique.add(c);
    System.out.println("Result: " + (unique.size() > 1));
}
Example implementation
You could use a more complicated regex to do it e.g.
(.*A.*[BE].*|.*[BE].*A.*)|(.*B.*[AE].*|.*[AE].*B.*)|(.*E.*[BA].*|.*[BA].*E.*)
But it's probably going to be easier to understand to do some kind of replacement, for instance make a loop that replaces one letter at a time with '', and check the size of the new string each time - if it changes the size of the string twice, then you've got two of your desired characters. EDIT: actually, if you know the set of desired characters at runtime before you do the check, NullUserException had it right in his comment - indexOf or contains will be more efficient and probably more readable than this.
Note that if your set of desired characters is unknown at compile time (or at least before the check at runtime), the second option is preferable - if you're looking for any characters, just replace all occurrences of the first character in a while (str.length() > 0) loop - the number of times it goes through the loop is the number of different characters you've got.
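A small sketch of that contains/indexOf variant (the method name is just illustrative; it simply counts how many of the desired letters occur at least once):
static boolean hasAtLeastTwoDistinct(String key, String desiredLetters) {
    int distinct = 0;
    for (char c : desiredLetters.toCharArray()) {
        if (key.indexOf(c) >= 0) {
            distinct++;
        }
    }
    return distinct >= 2;   // or >= 3 for the stricter requirement
}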
Explicitly mark the repetition of the desired letters.
It would look like this:
\b[ABE]{1,3}\b
It matches AAE, EEE, AEE but not AAAA, AAEE

Efficient way to search for a set of strings in a string in Java

I have a set of elements of size about 100-200. Let a sample element be X.
Each of the elements is a set of strings (number of strings in such a set is between 1 and 4). X = {s1, s2, s3}
For a given input string (about 100 characters), say P, I want to test whether any of the X is present in the string.
X is present in P iff for every s belonging to X, s is a substring of P.
The set of elements is available for pre-processing.
I want this to be as fast as possible within Java. Possible approaches which do not fit my requirements:
Checking whether all the strings s are substrings of P seems like a costly operation
Because s can be any substring of P (not necessarily a word), I cannot use a hash of words
I cannot directly use regex as s1, s2, s3 can be present in any order and all of the strings need to be present as substring
Right now my approach is to construct a huge regex out of each X with all possible permutations of the order of strings. Because the number of elements in X is <= 4, this is still feasible. It would be great if somebody could point me to a better (faster/more elegant) approach for the same.
Please note that the set of elements is available for pre-processing and I want the solution in java.
You can use regex directly:
Pattern regex = Pattern.compile(
"^ # Anchor search to start of string\n" +
"(?=.*s1) # Check if string contains s1\n" +
"(?=.*s2) # Check if string contains s2\n" +
"(?=.*s3) # Check if string contains s3",
Pattern.DOTALL | Pattern.COMMENTS);
Matcher regexMatcher = regex.matcher(subjectString);
foundMatch = regexMatcher.find();
foundMatch is true if all three substrings are present in the string.
Note that you might need to escape your "needle strings" if they could contain regex metacharacters.
It sounds like you're prematurely optimising your code before you've discovered that a particular approach is actually too slow.
The nice property of your set of strings is that the string must contain all elements of X as substrings -- meaning we can fail fast if we find one element of X that is not contained within P. This might turn out to be a better time-saving approach than others, especially if the elements of X are typically longer than a few characters and contain no or only a few repeating characters. For instance, a regex engine only needs to check 20 characters in a 100-character string when checking for the presence of a 5-character string with non-repeating characters (e.g. "coast"). And since X has 100-200 elements you really, really want to fail fast if you can.
My suggestion would be to sort the strings in order of length and check for each string in turn, stopping early if one string is not found.
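A rough sketch of that fail-fast loop (assuming the elements of X were sorted by length during pre-processing, e.g. longest first; the names are illustrative):
// pre-processing, done once per element X:
// sortedX.sort(Comparator.comparingInt(String::length).reversed());

static boolean containsAll(List<String> sortedX, String p) {
    for (String s : sortedX) {
        if (!p.contains(s)) {
            return false;   // fail fast: one missing element rules out the whole set X
        }
    }
    return true;
}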
Looks like a perfect case for the Rabin–Karp algorithm:
Rabin–Karp is inferior for single pattern searching to Knuth–Morris–Pratt algorithm, Boyer–Moore string search algorithm and other faster single pattern string searching algorithms because of its slow worst case behavior. However, Rabin–Karp is an algorithm of choice for multiple pattern search.
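For illustration, a compact single-pattern Rabin–Karp sketch of the rolling-hash idea (base and modulus are arbitrary choices; the multi-pattern variant keeps the hashes of all patterns of a given length in a set and compares each window hash against that set):
static boolean rabinKarpContains(String text, String pattern) {
    final long BASE = 256, MOD = 1_000_000_007L;
    int n = text.length(), m = pattern.length();
    if (m == 0) return true;
    if (m > n) return false;
    long pow = 1;                                      // BASE^(m-1) mod MOD
    for (int i = 1; i < m; i++) pow = pow * BASE % MOD;
    long patHash = 0, winHash = 0;
    for (int i = 0; i < m; i++) {
        patHash = (patHash * BASE + pattern.charAt(i)) % MOD;
        winHash = (winHash * BASE + text.charAt(i)) % MOD;
    }
    for (int i = 0; ; i++) {
        if (winHash == patHash && text.regionMatches(i, pattern, 0, m)) {
            return true;                               // hash hit, confirmed by a real comparison
        }
        if (i + m == n) return false;                  // the window has reached the end of the text
        winHash = (winHash - text.charAt(i) * pow % MOD + MOD) % MOD;  // drop the leftmost character
        winHash = (winHash * BASE + text.charAt(i + m)) % MOD;         // pull in the next character
    }
}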
When the preprocessing time doesn't matter, you could create a hash table which maps every one-letter, two-letter, three-letter etc. combination which occurs in at least one string to a list of strings in which it occurs.
The algorithm to index a string would look like this (untested):
HashMap<String, Set<String>> indexes = new HashMap<String, Set<String>>();
for (int pos = 0; pos < string.length(); pos++) {
    for (int sublen = 1; sublen <= string.length() - pos; sublen++) {
        String substring = string.substring(pos, pos + sublen);
        Set<String> stringsForThisKey = indexes.get(substring);
        if (stringsForThisKey == null) {
            stringsForThisKey = new HashSet<String>();
            indexes.put(substring, stringsForThisKey);
        }
        stringsForThisKey.add(string);
    }
}
Indexing each string that way would be quadratic in the length of the string, but it only needs to be done once for each string.
But the result would be constant-speed access to the list of strings in which a specific string occurs.
You are probably looking for the Aho-Corasick algorithm, which constructs an automaton (trie-like) from the set of strings (the dictionary), and matches the input string against the dictionary using this automaton.
You might want to consider using a "Suffix Tree" as well. I haven't used this code, but there is one described here
I have used proprietary implementations (that I no longer even have access to) and they are very fast.
One way is to generate every possible substring and add this to a set. This is pretty inefficient.
Instead, you can add all the strings from any point to the end (i.e. all suffixes) to a NavigableSet and search for the closest match. If the closest match starts with the string you are looking for, you have a substring match.
static class SubstringMatcher {
    final NavigableSet<String> set = new TreeSet<String>();

    SubstringMatcher(Set<String> strings) {
        for (String string : strings) {
            for (int i = 0; i < string.length(); i++)
                set.add(string.substring(i));
        }
        // remove duplicates.
        String last = "";
        for (String string : set.toArray(new String[set.size()])) {
            if (string.startsWith(last))
                set.remove(last);
            last = string;
        }
    }

    public boolean findIn(String s) {
        String s1 = set.ceiling(s);
        return s1 != null && s1.startsWith(s);
    }
}

public static void main(String... args) {
    Set<String> strings = new HashSet<String>();
    strings.add("hello");
    strings.add("there");
    strings.add("old");
    strings.add("world");
    SubstringMatcher sm = new SubstringMatcher(strings);
    System.out.println(sm.set);
    for (String s : "ell,he,ow,lol".split(","))
        System.out.println(s + ": " + sm.findIn(s));
}
prints
[d, ello, ere, hello, here, ld, llo, lo, old, orld, re, rld, there, world]
ell: true
he: true
ow: false
lol: false

To split only Chinese characters in java

I am writing a Java application, but I am stuck on this point.
Basically I have a string of Chinese characters that may also contain some Latin characters or numbers, let's say:
查詢促進民間參與公共建設法(210BOT法).
I want to split the Chinese characters one by one, but keep runs of Latin characters or numbers together, such as "BOT" above. So at the end I will have this kind of list:
[ 查, 詢, 促, 進, 民, 間, 參, 與, 公, 共, 建, 設, 法, (, 210, BOT, 法, ), ., ]
How can I resolve this problem (for java)?
Chinese characters lie within certain Unicode ranges:
2F00-2FDF: Kangxi
4E00-9FAF: CJK
3400-4DBF: CJK Extension
So all you basically need to do is to check whether the character's code point lies within the known ranges. This example is a good starting point to write a stack-based parser/splitter; you only need to extend it to separate digits from Latin letters, which should be obvious enough (hint: Character#isDigit()):
Set<UnicodeBlock> chineseUnicodeBlocks = new HashSet<UnicodeBlock>() {{
    add(UnicodeBlock.CJK_COMPATIBILITY);
    add(UnicodeBlock.CJK_COMPATIBILITY_FORMS);
    add(UnicodeBlock.CJK_COMPATIBILITY_IDEOGRAPHS);
    add(UnicodeBlock.CJK_COMPATIBILITY_IDEOGRAPHS_SUPPLEMENT);
    add(UnicodeBlock.CJK_RADICALS_SUPPLEMENT);
    add(UnicodeBlock.CJK_SYMBOLS_AND_PUNCTUATION);
    add(UnicodeBlock.CJK_UNIFIED_IDEOGRAPHS);
    add(UnicodeBlock.CJK_UNIFIED_IDEOGRAPHS_EXTENSION_A);
    add(UnicodeBlock.CJK_UNIFIED_IDEOGRAPHS_EXTENSION_B);
    add(UnicodeBlock.KANGXI_RADICALS);
    add(UnicodeBlock.IDEOGRAPHIC_DESCRIPTION_CHARACTERS);
}};

String mixedChinese = "查詢促進民間參與公共建設法(210BOT法)";
for (char c : mixedChinese.toCharArray()) {
    if (chineseUnicodeBlocks.contains(UnicodeBlock.of(c))) {
        System.out.println(c + " is chinese");
    } else {
        System.out.println(c + " is not chinese");
    }
}
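Extending that loop so that consecutive digits and consecutive Latin letters are grouped, as the question asks, could look roughly like this (an untested sketch; splitMixed is just an illustrative name and chineseBlocks is the set built above):
static List<String> splitMixed(String input, Set<Character.UnicodeBlock> chineseBlocks) {
    List<String> result = new ArrayList<>();
    StringBuilder run = new StringBuilder();
    int runType = 0;                                   // 0 = emit individually, 1 = Latin run, 2 = digit run
    for (char c : input.toCharArray()) {
        int type = Character.isDigit(c) ? 2
                 : Character.isLetter(c) && !chineseBlocks.contains(Character.UnicodeBlock.of(c)) ? 1
                 : 0;
        if (type != runType && run.length() > 0) {     // a Latin/digit run just ended
            result.add(run.toString());
            run.setLength(0);
        }
        if (type == 0) {
            result.add(String.valueOf(c));             // CJK characters and punctuation, one by one
        } else {
            run.append(c);                             // group consecutive Latin letters / digits
        }
        runType = type;
    }
    if (run.length() > 0) {
        result.add(run.toString());
    }
    return result;
}
Feeding it the mixedChinese string and the chineseUnicodeBlocks set from above should split the Chinese characters one by one while keeping "210" and "BOT" together.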
Good luck.
Disclaimer: I'm a complete Lucene newbie.
Using the latest version of Lucene (3.6.0 at the time of writing) I managed to get close to the result you require.
Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_36, Collections.emptySet());
List<String> words = new ArrayList<String>();
TokenStream tokenStream = analyzer.tokenStream("content", new StringReader(original));
CharTermAttribute termAttribute = tokenStream.addAttribute(CharTermAttribute.class);
try {
    tokenStream.reset(); // Resets this stream to the beginning. (Required)
    while (tokenStream.incrementToken()) {
        words.add(termAttribute.toString());
    }
    tokenStream.end(); // Perform end-of-stream operations, e.g. set the final offset.
}
finally {
    tokenStream.close(); // Release resources associated with this stream.
}
The result I get is:
[查, 詢, 促, 進, 民, 間, 參, 與, 公, 共, 建, 設, 法, 210bot, 法]
Here's an approach I would take.
You can use Character.codePointAt(char[] charArray, int index) to return the Unicode value for a char in your char array.
You will also need a mapping of Latin Unicode characters.
If you look in the source of Character.UnicodeBlock, the full Latin range is the interval [0x0000, 0x024F]. So basically you check if your Unicode code point is somewhere within that interval.
I suspect there is a way to just use a Character.Subset to check if it contains your char, but I haven't looked into that.
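A small sketch of that check, using the named UnicodeBlock constants rather than a hard-coded interval (isLatinOrDigit is just an illustrative name):
static boolean isLatinOrDigit(int codePoint) {
    Character.UnicodeBlock block = Character.UnicodeBlock.of(codePoint);
    return Character.isDigit(codePoint)
            || block == Character.UnicodeBlock.BASIC_LATIN
            || block == Character.UnicodeBlock.LATIN_1_SUPPLEMENT
            || block == Character.UnicodeBlock.LATIN_EXTENDED_A
            || block == Character.UnicodeBlock.LATIN_EXTENDED_B;
}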
