I am new to Java 8. When I try to filter for all the cities that contain a given letter, it doesn't work for me; however, when I run it with the old approach, it works.
List<String> cityList = new ArrayList<>();
cityList.add("Noida");
cityList.add("Gurgaon");
cityList.add("Denver");
cityList.add("London");
cityList.add("Utah");
cityList.add("New Delhi");
System.out.println(cityList);
/* Prior to Java 8 Approach */
for (String city : cityList) {
    if (city.contains("a")) {
        System.out.println(city + " contains letter a");
    }
}
/* Java 8 Approach */
System.out.println(Stream.of(cityList).filter(str -> str.contains("a")).collect(Collectors.toList()));
Here is the output
Noida contains letter a
Gurgaon contains letter a
Utah contains letter a
[]
Can you please explain where I am making a mistake?
Thanks in advance!
You'll need to use cityList.stream() rather than Stream.of(cityList). The reason is that Stream.of(cityList) returns a Stream<List<String>>, whereas you want a Stream<String>. You could still accomplish your task with your current approach, but you would need to flatten the Stream<List<String>> into a Stream<String> (I do not recommend this, as it causes unnecessary overhead; it's better to use cityList.stream()).
That said, here is how you should go about accomplishing your task:
System.out.println(cityList.stream().filter(str -> str.contains("a")).collect(Collectors.toList()));
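For completeness, the flatten-then-filter variant mentioned above (the one I advise against) would look like this; flatMap(List::stream) turns the single-element Stream<List<String>> back into a Stream<String>:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class FlattenExample {
    // Stream.of(cityList) yields a Stream<List<String>> with a single element;
    // flatMap(List::stream) flattens it back into a Stream<String> so that
    // filter sees individual city names.
    static List<String> citiesWithA(List<String> cityList) {
        return Stream.of(cityList)
                .flatMap(List::stream)
                .filter(str -> str.contains("a"))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> cityList = Arrays.asList(
                "Noida", "Gurgaon", "Denver", "London", "Utah", "New Delhi");
        System.out.println(citiesWithA(cityList)); // [Noida, Gurgaon, Utah]
    }
}
```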
Stream.of(cityList) creates a Stream<List<String>> having a single element - your input List.
You need to use cityList.stream() in order to get a Stream<String> containing the elements of your input List.
System.out.println(cityList.stream().filter(str -> str.contains("a")).collect(Collectors.toList()));
outputs
[Noida, Gurgaon, Utah]
The only reason your code passed compilation is that both List and String have a contains method that returns a boolean.
In Java, we can use HashMap as a key in other HashMap. I'm using an associative array as a map in PHP. Now there is a need to store an associative array as a key in another associative array.
I asked ChatGPT and it presented a lengthy solution:
Suppose $map is an array that I want to use as a key:
ksort($map);
$key = serialize($map);
if (!isset($main[$key])) {
    $main[$key] = 0;
}
$main[$key]++;
I'm running the above code in a loop where $map is:
on the first iteration: [a=>1, b=>2, c=>3]
on the second iteration: [b=>2, a=>1, c=>3]
After two iterations, $main looks like:
$main["serialized---key"] -> 2
Yes, I need to use ksort because the next $map could contain the same pairs but with the keys in a different order.
The above solution works fine, but it slows down drastically on large inputs. I need a better way that doesn't require ksort and serialization.
I also tried spl_object_hash instead of serialize, but it didn't work. Please suggest an optimal approach, just like HashMap in Java.
I also tried the SplObjectStorage class by type-casting the array into an object, but it gives incorrect results.
Detailed problem (what I'm actually trying to do):
I'm solving the following problem:
Given an array of strings strs, group the anagrams together. You can return the answer in any order.
An Anagram is a word or phrase formed by rearranging the letters of a different word or phrase, typically using all the original letters exactly once.
Example 1:
Input: strs =
["eat","tea","tan","ate","nat","bat"]
Output:
[["bat"],["nat","tan"],["ate","eat","tea"]]
My working code:
In short, I'm just grouping the strings based on their frequency map.
function groupAnagrams(array $arr) {
    $main = [];
    $ret = [];
    for ($i = 0; $i < count($arr); $i++) {
        $el = $arr[$i];
        $map = [];
        for ($j = 0; $j < strlen($el); $j++) {
            if (!isset($map[$el[$j]])) {
                $map[$el[$j]] = 0;
            }
            $map[$el[$j]]++;
        }
        ksort($map);
        $key = serialize($map);
        if (!isset($main[$key])) {
            $main[$key] = [];
        }
        $main[$key][] = $el;
    }
    //return $main;
    foreach ($main as $key => $val) {
        $ret[] = $val;
    }
    return $ret;
}
Here is the problem link: https://leetcode.com/problems/group-anagrams/
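For comparison, here is how the same grouping looks in Java with a HashMap, using the sorted characters of each word as the key instead of a serialized frequency map. This is a sketch of the general technique, not tied to the LeetCode harness: all anagrams of a word sort to the same string, so no ksort or serialization is needed.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GroupAnagrams {
    static List<List<String>> groupAnagrams(String[] strs) {
        Map<String, List<String>> main = new HashMap<>();
        for (String s : strs) {
            // Canonical key: the word's characters in sorted order.
            char[] chars = s.toCharArray();
            Arrays.sort(chars);
            String key = new String(chars);
            main.computeIfAbsent(key, k -> new ArrayList<>()).add(s);
        }
        return new ArrayList<>(main.values());
    }

    public static void main(String[] args) {
        System.out.println(groupAnagrams(
                new String[]{"eat", "tea", "tan", "ate", "nat", "bat"}));
    }
}
```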
I need some help with a small issue I'm having.
private
My problem is on this line
private
The list is filled with some IDs that are "whitelisted", and I'd like to compare the current (streamed) element's ID to all the values in the list. Only if the value matches one of the list IDs should it go on to the next step.
Thanks for the help.
Doing a sequential search of quizList, for every Quiz object with the given category, is bad for performance.
You should convert quizList into a Set for faster lookup.
Set<Integer> quizIds = quizList.stream()
.map(Quiz::getId)
.collect(Collectors.toSet());
return hibernateQuizJpaRepository.findAllByCategorySetContaining(new HibernateQuizCategory(quizCategoryId))
.stream()
.filter(current -> quizIds.contains(current.getId()))
// ...
If quizList is a Java collection, then it does not have a get method. In your filter, to check whether current.getId() exists in quizList, you can use streams:
.filter(current -> quizList.stream()
.anyMatch( quiz -> Objects.equals( current.getId(), quiz.getId() ) ) )
Please stop using streams everywhere:
.filter(quizList::contains)
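To make the Set-based lookup from the first answer concrete, here is a small self-contained sketch. Quiz here is a hypothetical minimal class standing in for the poster's actual entity:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class WhitelistFilter {
    // Hypothetical minimal Quiz class standing in for the poster's entity.
    static class Quiz {
        final int id;
        Quiz(int id) { this.id = id; }
        int getId() { return id; }
    }

    // Build the whitelist Set once; Set.contains is O(1) per candidate,
    // instead of re-scanning quizList for every streamed element.
    static List<Integer> keptIds(List<Quiz> whitelist, List<Quiz> candidates) {
        Set<Integer> quizIds = whitelist.stream()
                .map(Quiz::getId)
                .collect(Collectors.toSet());
        return candidates.stream()
                .filter(c -> quizIds.contains(c.getId()))
                .map(Quiz::getId)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Quiz> whitelist = Arrays.asList(new Quiz(1), new Quiz(3));
        List<Quiz> candidates = Arrays.asList(
                new Quiz(1), new Quiz(2), new Quiz(3), new Quiz(4));
        System.out.println(keptIds(whitelist, candidates)); // [1, 3]
    }
}
```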
I have two Java lists:
ArrayList<String> list1 = new ArrayList<String>();
ArrayList<String> list2 = new ArrayList<String>();
I load some data from one database into the first one and from a different database into the second one.
The strings in the lists look exactly the same:
3441134 China Ap F
3441134 China Ap F
But when I check:
if (list1.get(1).equals(list2.get(1))) {
    logger.info("true");
} else {
    logger.info("false");
}
I always get
false
Can somebody say why? I checked for white space and it's the same too.
I think it is something related to character encoding. You may be seeing the same string in the console and in the debugger, but internally one of them has an extra invisible byte because of the encoding.
Have a look at: Invisible characters in Java Strings; you will understand what I am saying.
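One way to see what I am describing is to print the code points of both strings: an invisible character such as a non-breaking space (U+00A0) looks exactly like a regular space in the console, yet makes equals() return false. A minimal sketch (the example strings are made up for illustration):

```java
public class InvisibleChars {
    // Returns the Unicode code points of a string, making invisible
    // characters (BOM, non-breaking space, zero-width space) visible.
    static String codePoints(String s) {
        StringBuilder sb = new StringBuilder();
        s.chars().forEach(c -> sb.append(String.format("U+%04X ", c)));
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        String a = "China Ap";           // regular space (U+0020)
        String b = "China\u00A0Ap";      // non-breaking space (U+00A0)
        System.out.println(a.equals(b)); // false, although both print as "China Ap"
        System.out.println(codePoints(a));
        System.out.println(codePoints(b));
    }
}
```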
Hello, I am implementing a Facebook-like program in Java using the Hadoop framework (I am new to this). The main idea is that I have an input .txt file like this:
Christina Bill,James,Nick,Jessica
James Christina,Mary,Toby,Nick
...
The first token is the user and the comma-separated names are his friends.
In the map function I scan each line of the file and emit the user with each one of his friends, like:
Christina Bill
Christina James
which will be converted to (Christina,[Bill,James,..])...
BUT the description of my assignment specifies that the reduce function will receive as key a tuple of two users, followed by both their friend lists; you count the common friends, and if that number is equal to or greater than a set number, like 5, you can safely assume that their uncommon friends can be suggested. So how exactly do I pass a pair of users to the reduce function? I thought the input of the reduce function has to be the same as the output of the map function. I started coding this, but I don't think this is the right approach. Any ideas?
public class ReduceFunction<KEY> extends Reducer<KEY, Text, KEY, Text> {
    private Text suggestedFriend = new Text();

    public void reduce(KEY key1, KEY key2, Iterable<Text> value1, Iterable<Text> value2, Context context) {
    }
}
The output of the map phase should, indeed, be of the same type as the input of the reduce phase. This means that, if there is a requirement for the input of the reduce phase, you have to change your mapper.
The idea is simple:
map(user u, friends F):
    for each f in F do
        emit (u-f, F\f)

reduce(userPair u1-u2, friends F1, F2):
    #commonFriends = |F1 intersection F2|
To implement this logic, you can just use a Text key, in which you concatenate the names of the users, using, e.g., the '-' character between them.
Note that in each reduce method, you will only receive two lists of friends, assuming that each user appears once in your input data. Then, you only have to compare the two lists for common names of friends.
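The two pieces of the pseudocode above can be sketched in plain Java, outside Hadoop, to show the idea: sorting the two names before concatenating guarantees that u1-u2 and u2-u1 produce the same reduce key, and the common-friend count is just a set intersection.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class FriendPairs {
    // Canonical key for a pair of users: sort the names so that
    // ("Christina","James") and ("James","Christina") give the same key.
    static String pairKey(String u1, String u2) {
        return u1.compareTo(u2) <= 0 ? u1 + "-" + u2 : u2 + "-" + u1;
    }

    // |F1 intersection F2| -- the number of common friends the reducer counts.
    static int commonFriends(Set<String> f1, Set<String> f2) {
        Set<String> common = new HashSet<>(f1);
        common.retainAll(f2);
        return common.size();
    }

    public static void main(String[] args) {
        System.out.println(pairKey("Christina", "James")); // Christina-James
        System.out.println(pairKey("James", "Christina")); // Christina-James
        System.out.println(commonFriends(
                new HashSet<>(Arrays.asList("Bill", "James", "Nick", "Jessica")),
                new HashSet<>(Arrays.asList("Christina", "Mary", "Toby", "Nick")))); // 1
    }
}
```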
Check whether you can implement a custom record reader that reads two records at once from the input file in the mapper class, and then emit context.write(outkey, NullWritable.get()); from the mapper. In the reducer class you then need to handle the two records that arrived as the key (outkey) from the mapper. Good luck!
I'm trying to use Java 8 lambda expressions and streams to parse some logs. I have one giant log file that contains run after run. I want to split it into separate collections, one for each run. I do not know in advance how many runs the log has. And to exercise my very weak lambda-expression muscles, I'd like to do it in one pass through the list.
Here is my current implementation:
List<String> lines = readLines(fileDirectory);
Pattern runStartPattern = Pattern.compile("INFO: \\d\\d:\\d\\d:\\d\\d: Starting");
LinkedList<List<String>> testRuns = new LinkedList<>();
List<String> currentTestRun = new LinkedList<>(); // In case log starts in middle of run
testRuns.add(currentTestRun);
for (String line : lines) {
    if (runStartPattern.matcher(line).find()) {
        currentTestRun = new ArrayList<>();
        testRuns.add(currentTestRun);
    }
    currentTestRun.add(line);
}
if (testRuns.getFirst().size() == 0) { // In case log starts at a run
    testRuns.removeFirst();
}
Basically something like TomekRekawek's solution here but with an unknown partition size to begin with.
There's no standard way to easily achieve this in Stream API, but my StreamEx library has a groupRuns method which can solve this pretty easily:
List<List<String>> testLines = StreamEx.of(lines)
.groupRuns((a, b) -> !runStartPattern.matcher(b).find())
.toList();
It groups the input elements based on a predicate which is applied to pairs of adjacent elements. Here we don't want to group two lines together if the second line matches the runStartPattern. This works correctly regardless of whether the log starts in the middle of a run. This feature also works nicely with parallel streams.