How to break from a forEach loop when an exception occurs - Java

I have a forEach loop inside a for loop. Something like this:
for (Iterator iterator = notLoadedFiles.iterator(); iterator.hasNext();) {
    SwapReport swapReport = (SwapReport) iterator.next();
    Reader reader = Files.newBufferedReader(Paths.get(swapReport.getFilePath()));
    CSVParser csvParser = new CSVParser(reader, CSVFormat.DEFAULT);
    List<CSVRecord> initList = csvParser.getRecords();
    CSVRecord csvRecord = initList.stream()
            .filter(record -> record.get(0).endsWith("#xxx.com"))
            .findAny().orElse(null);
    if (csvRecord != null) {
        // partition the list
        List<List<CSVRecord>> output = ListUtils.partition(initList, chunkSize);
        // Set data by slice
        output.stream().forEach(slice -> {
            String inParams = slice.stream()
                    .filter(record -> !record.get(0).endsWith("#xxx.com"))
                    .map(record -> "'" + record.get(0) + "'")
                    .collect(Collectors.joining(","));
            try {
                // Retrieve data for one slice
                setAccountData(inParams);
            } catch (SQLException e) {
                // How to break?
            }
        });
    }
}
The forEach loop executes the setAccountData method. The try/catch block inside the forEach loop is there because setAccountData throws SQLException.
What I want is to stop (break out of) the forEach loop when an exception occurs.
If there is an exception, the inner loop should stop, and the outer for loop should proceed to the next element in the iterator.
How can I do that?
Thank you

Sadly, .forEach() does not provide a good way to break out of the loop. A return statement inside the lambda only makes .forEach() move on to the next element (it behaves like continue), which I'm sure is not what you are asking for.
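For example, with a hypothetical shouldSkip check, this return only skips the current slice; the stream keeps going:
output.stream().forEach(slice -> {
    if (shouldSkip(slice)) {
        return; // acts like 'continue', not 'break'
    }
    // process the slice ...
});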
Solution
Therefore, you can try the .takeWhile() method introduced in Java 9. One catch: a lambda cannot reassign a local boolean (captured variables must be effectively final), so use a java.util.concurrent.atomic.AtomicBoolean as the flag. For example, the code becomes:
AtomicBoolean keepGoing = new AtomicBoolean(true);
output.stream().takeWhile(slice -> keepGoing.get()).forEach(slice -> {
    String inParams = slice.stream()
            .filter(record -> !record.get(0).endsWith("#xxx.com"))
            .map(record -> "'" + record.get(0) + "'")
            .collect(Collectors.joining(","));
    try {
        // Retrieve data for one slice
        setAccountData(inParams);
    } catch (SQLException e) {
        // To break, set the flag to false; takeWhile then stops the stream.
        keepGoing.set(false);
    }
});
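For completeness: if you don't need a stream at all, a plain loop supports break directly. A minimal sketch reusing the names from the question:
for (List<CSVRecord> slice : output) {
    String inParams = slice.stream()
            .filter(record -> !record.get(0).endsWith("#xxx.com"))
            .map(record -> "'" + record.get(0) + "'")
            .collect(Collectors.joining(","));
    try {
        setAccountData(inParams);
    } catch (SQLException e) {
        break; // stops the inner loop; the outer for loop moves on
    }
}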
References
Java Stream Document
Example of using takeWhile

Related

How to use multi-threading to parallelize a for loop in Java?

I am writing code which picks multiple API call details from a file, executes them one by one, and collects the response data in an ArrayList. Below is my current code.
ArrayList<APICallDetails> apiCallDetailsArray = new ArrayList<>();
APICallDetails apiCallDetails = new APICallDetails();
for (count = 1; count <= callsCount; count++) {
    try {
        apiCallDetails = new APICallDetails();
        apiCallDetails.setName(property.getPropertyReader(callName + "_" + count + "_Name", propsFile));
        apiCallDetails.setHost(marketConfigs.getRawJson().get(property.getPropertyReader(callName + "_" + count + "_Host", propsFile)).toString().replaceAll("\"", ""));
        apiCallDetails.setPath(property.getPropertyReader(callName + "_" + count + "_Path", propsFile));
        apiCallDetails.setMethod(property.getPropertyReader(callName + "_" + count + "_Method", propsFile));
        apiCallDetails.setBody(property.getPropertyReader(callName + "_" + count + "_Body", propsFile));
        apiCallDetails = sendAPIRequest.mwRequestWithoutBody(apiCallDetails, marketConfigs);
        BufferedWriter out = null;
        try {
            out = new BufferedWriter(new FileWriter("C:\\file" + count + ".html"));
            out.write("something");
            out.close();
        } catch (IOException e) {
            e.printStackTrace();
            logger.error(new Date() + " - Error in " + getClass() + ".apiCallRequester() flow: " + e.toString());
        }
        apiCallDetailsArray.add(apiCallDetails);
    } catch (NullPointerException e) {
        e.printStackTrace();
        logger.error(new Date() + " - Error in " + getClass() + ".apiCallRequester() flow: " + e.toString());
    }
}
As there are many API calls, this takes the sum of the response times of all the calls. I want these calls to run in parallel and to store the response data in an ArrayList which I can use afterwards.
I am new to Java, so can someone please help me with this?
You can use parallel streams. The following invocation will call createAPICallDetails(idx) in parallel and collect the returned objects into a List:
List<APICallDetails> result = IntStream.range(0, callsCount)
        .parallel()
        .mapToObj(idx -> createAPICallDetails(idx))
        .collect(Collectors.toList());
So the only thing left for you is to implement the logic of:
APICallDetails createAPICallDetails(int index) { ... }
to create a single APICallDetails object for the given index, so it can be used in the lambda above.
Hope this helps.

How to deal with duplicate keys in HashMaps?

Hello fellow soldiers.
Obviously, keys in hashmaps are unique. However, I've been trying to write code that reads a CSV file and then puts the keys and values in a map. Some keys occur multiple times (every key appears about 15 times in the CSV file). In that case the map should hold the sum of the values, with each key appearing just once.
How can I do that? My code right now is as follows:
BufferedReader br = null;
String line;
try {
    br = new BufferedReader(new FileReader(filepath));
} catch (FileNotFoundException fnfex) {
    System.out.println(fnfex.getMessage() + "Bestand niet gevonden!");
    System.exit(0);
}
// this is where we read lines
try {
    while ((line = br.readLine()) != null) {
        String[] splitter = line.split(cvsSplitBy);
        if (splitter[0] != "Voertuig") {
            alldataMap.put(splitter[0], splitter[8]);
        }
        // MIGHT BE JUNK, DONT KNOW YET
        /*if ((splitter[0].toLowerCase()).contains("1")) {
            double valuekm = Double.parseDouble(splitter[8]);
            license1 += valuekm;
            System.out.println(license1);
        }
        else {
            System.out.println("not found");
        }*/
    }
    System.out.println(alldataMap);
    TextOutput();
} catch (IOException ioex) {
    System.out.println(ioex.getMessage() + " Error 1");
} finally {
    System.exit(0);
}
So if I have the following info (in this case it's the 0th and 8th word read from every line in the CSV file):
Apples; 299,9
Bananas; 300,23
Apples; 3912,1
Bananas;342
Bananas;343
It should return
Apples;Total
Bananas;Total
Try the following:
if (alldataMap.containsKey(splitter[0])) {
    Double sum = alldataMap.remove(splitter[0]) + Double.parseDouble(splitter[8]);
    alldataMap.put(splitter[0], sum);
} else {
    alldataMap.put(splitter[0], Double.valueOf(splitter[8]));
}
You can use putIfAbsent and compute since Java 8:
Map<String, Integer> myMap = new HashMap<>();
//...
String fruitName = /*whatever*/;
int qty = /*whatever*/;
myMap.putIfAbsent(fruitName, 0);
myMap.compute(fruitName, (k, oldQty) -> oldQty + qty);
You can use Map#containsKey() to check for an existing mapping, then if there is one use Map#get() to retrieve the value and add the new one, and finally Map#put() to store the sum:
if (map.containsKey(key))
    map.put(key, map.get(key) + value);
else
    map.put(key, value);
See the java.util.Map documentation for those methods.
I would use Map#merge:
alldataMap.merge(splitter[0], Double.valueOf(splitter[8]), (oldVal, newVal) -> oldVal + newVal);
From the doc:
If the specified key is not already associated with a value or is associated with null, associates it with the given non-null value. Otherwise, replaces the associated value with the results of the given remapping function, or removes if the result is null. This method may be of use when combining multiple mapped values for a key. For example, to either create or append a String msg to a value mapping:
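map.merge(key, msg, String::concat)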
I won't suggest a way to do it in a loop because that's already been covered, but here is a Streams solution in a single statement:
Map<String, Double> alldataMap = new HashMap<>();
try {
alldataMap =
Files.lines(Paths.get("", filepath))
.map(str -> str.split(cvsSplitBy))
.filter(splitte -> !splitte[0].equals("Voertuig"))
.collect(Collectors.toMap(sp -> sp[0],
sp -> Double.parseDouble(sp[8].replaceAll(",", ".")),
(i1, i2) -> i1 + i2));
} catch (IOException e) {
e.printStackTrace();
}
System.out.println(alldataMap); // {Apples=4212.0, Bananas=985.23}
The steps are the same:
Iterate over the lines
Split on cvsSplitBy
Skip lines that start with Voertuig (note: use .equals(), not !=)
Build the map following 3 rules:
the key is the first String
the value is the second String, parsed as a Double
if a merge is required, sum both values
Edit: since nobody proposed the use of .getOrDefault(), here it is:
while ((line = br.readLine()) != null) {
    String[] splitter = line.split(cvsSplitBy);
    if (!splitter[0].equals("Voertuig")) {
        alldataMap.put(splitter[0],
                alldataMap.getOrDefault(splitter[0], 0.0) +
                Double.parseDouble(splitter[8].replaceAll(",", ".")));
    }
}
If the key already exists, its value is summed with the new one; if the key does not exist, the value is summed with 0.0.

How can I stop the execution of a while loop with keyboard input?

I have several threads. Each thread has a while(true) loop inside, where I append text cycle by cycle. I can't find a good way to replace the while(true) loop with a flag, in such a way that I can close the file when I come out of the cycle. I want to do this when I type something, for example, or when I press the Eclipse red button.
This is the constructor (Node is a Thread):
public Node(Channel c, int address) {
    my_address = address;
    try {
        writer = new CSVWriter(new FileWriter(my_address + "_id.csv"), ',', ' ', ' ', "\n");
        writer2 = new CSVWriter(new FileWriter(my_address + "_label.csv"), ',', ' ', ' ', "\n");
        String[] entries = "num#state#duration#event#condition#condition result#action1#action2#backoff value".split("#");
        writer.writeNext(entries);
        writer2.writeNext(entries);
    } catch (IOException e) {
        e.printStackTrace();
    }
}
This is the loop in which I modify the file:
while (true) {
    // write id value
    String id_to_split = num + "#" + fsm.current_state.nome + "#" + tempo_minore + "#" +
            fsm.current_transition.e.getId() + "#" + fsm.current_transition.c.getId() + "#" +
            fsm.current_transition.c.getFlag() + "#" + fsm.current_transition.a.getId() + "#" +
            fsm.current_transition.a2.getId() + "#" + backoff;
    String[] id_entries = id_to_split.split("#");
    writer.writeNext(id_entries);
    // write name
    String label_to_split = num + "#" + fsm.current_state.nome + "#" + tempo_minore + "#" +
            fsm.current_transition.e.getLabel() + "#" + fsm.current_transition.c.getLabel() + "#" +
            fsm.current_transition.c.getFlag() + "#" + fsm.current_transition.a.getLabel() + "#" +
            fsm.current_transition.a2.getLabel() + "#" + backoff;
    String[] label_entries = label_to_split.split("#");
    writer2.writeNext(label_entries);
    num++;
}
closeCSVs();
}
The method closeCSVs():
public void closeCSVs() {
try {
writer.close();
writer2.close();
} catch (IOException e) {
e.printStackTrace();
}
}
If I understood your question correctly, what you're looking for is either a try-with-resources block, which works as follows:
try (FileReader reader = new FileReader("path")) {
    while (true) {
        // use the resource
    }
}
You can use this with any class that implements the AutoCloseable interface (basically every class that offers a .close() method).
The resource will be closed automatically when the try block is exited.
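Applied to the writers from your question, a sketch (assuming opencsv's CSVWriter, which implements Closeable, and with the infinite loop replaced by an interruption check so the loop can actually end) could look like:
try (CSVWriter writer = new CSVWriter(new FileWriter(my_address + "_id.csv"), ',', ' ', ' ', "\n");
     CSVWriter writer2 = new CSVWriter(new FileWriter(my_address + "_label.csv"), ',', ' ', ' ', "\n")) {
    while (!Thread.currentThread().isInterrupted()) {
        // ... write rows as in the original loop ...
    }
} // both writers are closed here, even if the loop exits with an exception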
The same solution with different code would be to wrap it in a classic try and add a finally block to it:
FileReader reader = null; // declared outside the try so the finally block can see it
try {
    reader = new FileReader("path");
    while (true) {
        // use the resource
    }
} finally {
    if (reader != null) {
        reader.close();
    }
}
You may also need to close the files in a shutdown hook; see Runtime.addShutdownHook.
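A minimal sketch, assuming node is the Node instance from the question (hooks run on normal termination and SIGTERM, but a hard kill can skip them, and the behavior of Eclipse's red button varies by platform):
Runtime.getRuntime().addShutdownHook(new Thread(() -> {
    // flush and close the CSV writers before the JVM exits
    node.closeCSVs();
}));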

Filter and map a java stream at the same time

I have a list of strings, each of which represents a date. I'd like to map this list into a list of DateTime objects; however, if any of the strings are invalid (parsing throws an exception), I'd like to log an error and not add it to the final list. Is there a way to do both the filtering and the mapping at the same time?
This is what I currently have:
List<String> dateStrs = ...;
dateStrs.stream().filter(s -> {
    try {
        dateTimeFormatter.parseDateTime(s);
        return true;
    } catch (Exception e) {
        log.error("Illegal format");
        return false;
    }
}).map(s -> {
    return dateTimeFormatter.parseDateTime(s);
}).collect(...);
Is there any way to do this so that I don't have to parseDateTime twice for each element?
Thanks
In my opinion, it would be more idiomatic to use flatMap here:
dateStrs.stream().flatMap(s -> {
    try {
        return Stream.of(dateTimeFormatter.parseDateTime(s));
    } catch (Exception e) {
        return Stream.empty();
    }
}).collect(...);
Here you can do everything in a single operation.
Do the operations in the opposite order.
List<String> dateStrs = ...;
dateStrs.stream().map(s -> {
    try {
        return dateTimeFormatter.parseDateTime(s);
    } catch (Exception e) {
        return null;
    }
}).filter(d -> d != null).collect(...);
(Too late I realize this is essentially the same as #wero's answer, but hopefully the code makes it clear.)
Updating for Java 9
Similar to Tagir's solution, you can map each string to an Optional, logging an error when the conversion fails to produce a value. Then, using the new Optional::stream method, you can flatMap over the optionals, removing the failed conversions (empty optionals) from the stream.
dateStrs.stream()
        .map(s -> {
            try {
                return Optional.of(dateTimeFormatter.parseDateTime(s));
            } catch (Exception e) {
                log.error("Illegal format: " + s);
                return Optional.empty();
            }
        })
        .flatMap(Optional::stream)
        .collect(...);
You could first map each string to its parsed date; if you encounter an invalid date string, log it and return null.
Then, in a second step, filter out the null dates.

Processing log files: distribute work among worker threads to find a simple sum

I want to distribute work among threads: load parts of a log file and then hand out those parts for processing.
In my simple example, I wrote 800,000 lines of data with a number in each line, and then I sum the numbers.
When I run this example, I get totals that are slightly off. Do you see anything in this threading code where threads might not complete properly and hence fail to total the numbers?
public void process() {
    final String d = FILE;
    FileInputStream stream = null;
    try {
        stream = new FileInputStream(d);
        final BufferedReader reader = new BufferedReader(new InputStreamReader(stream));
        String data = "";
        do {
            final Stack<List<String>> allWork = new Stack<List<String>>();
            final Stack<ParserWorkerAtLineThread> threadPool = new Stack<ParserWorkerAtLineThread>();
            do {
                if (data != null) {
                    final List<String> currentWorkToDo = new ArrayList<String>();
                    do {
                        data = reader.readLine();
                        if (data != null) {
                            currentWorkToDo.add(data);
                        } // End of the if //
                    } while (data != null && (currentWorkToDo.size() < thresholdLinesToAdd));
                    // Hand out future work
                    allWork.push(currentWorkToDo);
                } // End of the if //
            } while (data != null && (allWork.size() < numberOfThreadsAllowedInPool));
            // Process the lines from the work to do //
            // Hand out the work
            for (final List<String> theCurrentTaskWork : allWork) {
                final ParserWorkerAtLineThread t = new ParserWorkerAtLineThread();
                t.data = theCurrentTaskWork;
                threadPool.push(t);
            }
            for (final Thread workerAboutToDoWork : threadPool) {
                workerAboutToDoWork.start();
                System.out.println(" -> Starting my work... My name is : " + workerAboutToDoWork.getName());
            } // End of the for //
            // Waiting on threads to finish //
            System.out.println("Waiting for all work to complete ... ");
            for (final Thread waiting : threadPool) {
                waiting.join();
            } // End of the for //
            System.out.println("Done waiting ... ");
        } while (data != null); // End of outer parse file loop //
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        if (stream != null) {
            try {
                stream.close();
            } catch (final IOException e) {
                e.printStackTrace();
            }
        } // End of the stream //
    } // End of the try - catch finally //
}
While you're at it, why not use a bounded BlockingQueue (ArrayBlockingQueue) of size thresholdLinesToAdd? Your producer code would read the lines and use the queue's put method, which blocks until space is available.
As Chris mentioned before, use Executors.newFixedThreadPool() and submit your work items to it. Your consumers would call take(), which blocks until an element is available; a sketch of this shape follows.
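A minimal, self-contained sketch of that producer/consumer shape; all names here (SumLines, numbers.log, the worker count, the POISON marker) are illustrative, not from the original code:
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.LongAdder;

// One producer reads lines into a bounded queue; a fixed pool of
// workers drains it and adds each parsed number to a thread-safe total.
public class SumLines {
    private static final String POISON = "__EOF__"; // end-of-input marker

    public static void main(String[] args) throws Exception {
        int workers = 4;
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(1000);
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        LongAdder total = new LongAdder();

        for (int i = 0; i < workers; i++) {
            pool.submit(() -> {
                try {
                    String line;
                    while (!POISON.equals(line = queue.take())) {
                        total.add(Long.parseLong(line.trim()));
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        try (BufferedReader reader = new BufferedReader(new FileReader("numbers.log"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                queue.put(line); // blocks while the queue is full
            }
        }
        for (int i = 0; i < workers; i++) {
            queue.put(POISON); // one marker per worker so every worker exits
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("Total: " + total.sum());
    }
}
LongAdder keeps the shared total thread-safe; an unsynchronized shared field is a common cause of "slightly off" totals like the ones you describe.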
This is not map/reduce. If you wanted map/reduce, you would need another queue in the mix, to which you would publish keys. For example, if you were to count the number of INFO and DEBUG occurrences in your logs, your mapper would queue each extracted word as it encounters it. The reducer would dequeue the mapper's output and increment a counter for each word. The result of your reducer would be the word counts for DEBUG and INFO.
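A rough sketch of that reducer loop, assuming a hypothetical BlockingQueue<String> named words that the mappers feed, plus the same POISON end marker as above (take() throws InterruptedException, so this belongs in a method that handles it):
Map<String, Long> counts = new HashMap<>();
String word;
while (!POISON.equals(word = words.take())) {
    counts.merge(word, 1L, Long::sum);
}
// counts.get("INFO") and counts.get("DEBUG") now hold the totals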
