I have the following method:
public String exportAsCsv(CqlQuery query) {
    Iterator<String> result = queryService.execute(.....);
    StringBuilder buf = new StringBuilder();
    while (result.hasNext()) {
        buf.append(result.next());
    }
    return buf.toString();
}
It executes a query that returns an Iterator<String> containing gigabytes of data, so appending it all to a StringBuilder is not the best idea...
I would like to change my method so that it returns an InputStream instead.
This could be one possible implementation (pseudocode):
public InputStream exportAsCsv(CqlQuery query) {
    final Iterator<String> result = queryService.execute(query, false);
    return new MagicalInputStream() {
        @Override
        byte[] readNext() {
            if (!result.hasNext()) {
                return null;
            } else {
                return result.next().getBytes();
            }
        }
    };
}
I am looking for an InputStream where I have to implement an abstract method (like byte[] readNext()) that is used to read data chunks one by one. This input stream has to buffer the current chunk, stream it back, and read the next chunk only when its buffer is empty.
The idea is that I read the next element from the Iterator ONLY when the "client" reads more bytes from the input stream.
Or there might be another way to change my method so that it returns an InputStream instead of a String - any ideas?
The whole InputStream implementation could be avoided if you allow your method to accept a java.io.Writer. Instead of appending Strings to an in-memory StringBuilder, you append them to the provided Writer.
public void exportAsCsv(CqlQuery query, Writer writer) throws IOException {
    Iterator<String> result = queryService.execute(.....);
    while (result.hasNext()) {
        writer.append(result.next());
    }
}
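Used this way, the caller decides where the data goes (a FileWriter, a servlet response writer, etc.). A self-contained sketch of the same pattern, with the Iterator passed in directly since queryService and CqlQuery are not shown here:

```java
import java.io.IOException;
import java.io.StringWriter;
import java.io.Writer;
import java.util.Iterator;
import java.util.List;

public class WriterExport {
    // Same idea as above: each line is streamed straight to the Writer,
    // so at no point is the whole result held in memory.
    // (Assumes each line already carries its own terminator, as in the question.)
    static void exportAsCsv(Iterator<String> lines, Writer writer) throws IOException {
        while (lines.hasNext()) {
            writer.append(lines.next());
        }
    }

    public static void main(String[] args) throws IOException {
        Iterator<String> lines = List.of("a,b\n", "c,d\n").iterator();
        StringWriter out = new StringWriter(); // could equally be a FileWriter
        exportAsCsv(lines, out);
        System.out.print(out);
    }
}
```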
If you really want an InputStream, though, you could try something like this:
public InputStream exportAsCsv(CqlQuery query) {
    Iterator<String> result = queryService.execute(.....);
    return new SequenceInputStream(asStreamEnum(result));
}
private Enumeration<InputStream> asStreamEnum(final Iterator<String> it) {
    return new Enumeration<InputStream>() {
        @Override
        public boolean hasMoreElements() {
            return it.hasNext();
        }

        @Override
        public InputStream nextElement() {
            try {
                // getBytes(StandardCharsets.UTF_8) would avoid this checked exception
                return new ByteArrayInputStream(it.next().getBytes("UTF-8"));
            } catch (UnsupportedEncodingException ex) {
                throw new RuntimeException(ex);
            }
        }
    };
}
I haven't actually tested this approach yet, so be warned; conceptually, though, I think this is what you're after.
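For completeness, the chunk-buffering stream described in the question can also be written by hand. A minimal self-contained sketch (IteratorInputStream is a made-up name standing in for the question's hypothetical MagicalInputStream):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.Iterator;
import java.util.List;

// Pulls one chunk from the Iterator only when the current buffer is exhausted.
class IteratorInputStream extends InputStream {
    private final Iterator<String> source;
    private byte[] buf = new byte[0];
    private int pos = 0;

    IteratorInputStream(Iterator<String> source) {
        this.source = source;
    }

    @Override
    public int read() throws IOException {
        while (pos >= buf.length) {          // current chunk used up
            if (!source.hasNext()) {
                return -1;                   // end of stream
            }
            buf = source.next().getBytes(StandardCharsets.UTF_8);
            pos = 0;
        }
        return buf[pos++] & 0xFF;            // return as unsigned byte
    }
}

public class Demo {
    public static void main(String[] args) throws IOException {
        InputStream in = new IteratorInputStream(List.of("ab", "", "cd").iterator());
        StringBuilder sb = new StringBuilder();
        int b;
        while ((b = in.read()) != -1) {
            sb.append((char) b);
        }
        System.out.println(sb); // abcd
    }
}
```

In practice you would also override read(byte[], int, int) for bulk reads; the single-byte read() above keeps the sketch short.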
I have a requirement to fetch thousands of records from the DB, convert them to JSON and write them to a zip file. I am able to do that with the implementation below.
@Override
public StreamingResponseBody fetchAndWriteOrderAllocationsToFile(LocalDate date) {
    int orderCount = orderDao.getOrderCount(date);
    // Pagination
    return outputStream -> {
        try (ZipOutputStream zipOut = new ZipOutputStream(new BufferedOutputStream(outputStream))) {
            zipOut.putNextEntry(new ZipEntry("report.txt"));
            int startIndex = 0, count = orderCount;
            do {
                List<String> orderSerialNos = orderDao.getOrderSerialNos(date, startIndex, PAGESIZE);
                orderSerialNos.parallelStream().forEach(orderSerialNo -> {
                    try {
                        writeToStream(zipOut, allocationsService.getAllocationsFromOrderItems(orderSerialNo), objectMapper);
                    } catch (Exception e) {
                        writeToStream(zipOut, Allocations.builder()
                                .orderSerialNo(orderSerialNo)
                                .build(), objectMapper);
                    }
                });
                count -= PAGESIZE;
                startIndex += PAGESIZE;
            } while (count > 0);
        }
    };
}
private static void writeToStream(OutputStream outputStream,
                                  Object result,
                                  ObjectMapper objectMapper) {
    try {
        objectMapper.writeValue(outputStream, result);
    } catch (IOException e) {
        log.error("Error writing results to stream", e);
    }
}
However, I would like to introduce a newline character (or comma) after every JSON document written to the file.
The closest I got was overriding the PrettyPrinter.writeEndObject method as below and using the overridden PrettyPrinter. This obviously adds a newline after every sub-object of the JSON as well as after every document. The expectation is to have the newline only after each top-level JSON document.
Is there any way to accomplish this?
@Override
public void writeEndObject(JsonGenerator g, int nrOfEntries) throws IOException {
    g.writeRaw("}\n");
}
Above code gives:
{"orderSerialNo":"1234-ABCD","orderId":1,"shippingAllocations":[{"recipientId":25,"itemId":3893814,"itemSku":"ABC","quantity":1,"shippingItemId":3893815,"shippingSku":"DEF","shipperId":66,"allocation":0}
],"sdAllocations":[],"idAllocations":[]}
{"orderSerialNo":"6789-EFGH","orderId":2,"shippingAllocations":[{"recipientId":45,"itemId":88,"itemSku":"BLAH","quantity":1,"shippingItemId":78,"shippingSku":"HELP","shipperId":99,"allocation":7.95}
],"sdAllocations":[],"idAllocations":[]}
The expectation is:
{"orderSerialNo":"1234-ABCD","orderId":1,"shippingAllocations":[{"recipientId":25,"itemId":3893814,"itemSku":"ABC","quantity":1,"shippingItemId":3893815,"shippingSku":"DEF","shipperId":66,"allocation":0}],"sdAllocations":[],"idAllocations":[]}
{"orderSerialNo":"6789-EFGH","orderId":2,"shippingAllocations":[{"recipientId":45,"itemId":88,"itemSku":"BLAH","quantity":1,"shippingItemId":78,"shippingSku":"HELP","shipperId":99,"allocation":7.95}],"sdAllocations":[],"idAllocations":[]}
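One way to get the separator only between top-level documents is to write it at the stream level yourself, after each writeValue call, rather than customizing the PrettyPrinter (which fires for every nested object). A self-contained sketch of the idea; the trivial writeJson helper below is a stand-in for objectMapper.writeValue, which is an assumption made to keep the example runnable:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.util.List;

public class NewlinePerDocument {
    // Stand-in for objectMapper.writeValue(outputStream, result); any serializer
    // works, since the separator is written to the stream afterwards, not by it.
    static void writeJson(OutputStream out, String json) throws IOException {
        out.write(json.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        List<String> documents = List.of("{\"orderId\":1}", "{\"orderId\":2}");
        for (String doc : documents) {
            writeJson(out, doc);
            out.write('\n'); // newline only after each top-level document
        }
        System.out.print(out);
    }
}
```

In the code above, the same effect would mean writing the '\n' to zipOut right after the writeToStream call (which also requires serializing the documents sequentially rather than via parallelStream, so the separator lands between documents).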
I'm working on a task that requires me to read from a .csv file using the stream API, go over each line and construct an object from it. The object class is called Planet:
public class Planet {
    private String name;
    private long inhabitants;
    private boolean stargateAvailable;
    private boolean dhdAvailable;
    private List<String> teamsVisited;

    public Planet(String name, long inhabitants, boolean stargateAvailable, boolean dhdAvailable, List<String> teamsVisited) {
        this.name = name;
        this.inhabitants = inhabitants;
        this.stargateAvailable = stargateAvailable;
        this.dhdAvailable = dhdAvailable;
        this.teamsVisited = teamsVisited;
    }

    public String getName() {
        return name;
    }

    public long getInhabitants() {
        return inhabitants;
    }

    public boolean isStargateAvailable() {
        return stargateAvailable;
    }

    public boolean isDhdAvailable() {
        return dhdAvailable;
    }

    public List<String> getTeamsVisited() {
        return teamsVisited;
    }

    @Override
    public String toString() {
        return name;
    }
}
So, using the stream API to go over each line of the .csv file, I need to create objects of class Planet.
I haven't made any progress at all because I really am not sure how to use the stream API:
public class Space {
    public List<Planet> csvDataToPlanets(String filePath) {
        return null;
    }
}
Try the snippet below.
File inputF = new File(inputFilePath);
InputStream inputFS = new FileInputStream(inputF);
BufferedReader br = new BufferedReader(new InputStreamReader(inputFS));
// skip the header of the csv; mapToItem is a Function<String, YourItem>
// that you define to parse one line into an object
inputList = br.lines().skip(1).map(mapToItem).collect(Collectors.toList());
br.close();
For more information check this link
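Adapted to the Planet class from the question, the mapping function might look like the sketch below. The column order and the ';' separator for the teams column are assumptions; adjust them to match your actual file:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

public class CsvToPlanets {
    // Minimal stand-in for the Planet class from the question.
    record Planet(String name, long inhabitants, boolean stargateAvailable,
                  boolean dhdAvailable, List<String> teamsVisited) {}

    // Assumed column order: name,inhabitants,stargate,dhd,team1;team2;...
    static final Function<String, Planet> mapToPlanet = line -> {
        String[] p = line.split(",");
        return new Planet(p[0], Long.parseLong(p[1]), Boolean.parseBoolean(p[2]),
                Boolean.parseBoolean(p[3]),
                p.length > 4 ? Arrays.asList(p[4].split(";")) : List.of());
    };

    public static void main(String[] args) {
        List<String> lines = List.of(
                "name,inhabitants,stargate,dhd,teams",   // header row
                "Abydos,5000,true,true,SG-1;SG-3",
                "Chulak,800000,true,false,SG-1");
        List<Planet> planets = lines.stream()
                .skip(1)                 // skip the header, as in the snippet above
                .map(mapToPlanet)
                .collect(Collectors.toList());
        planets.forEach(p -> System.out.println(p.name()));
    }
}
```

In csvDataToPlanets you would feed Files.lines(Paths.get(filePath)) into the same pipeline instead of the in-memory list.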
public static void main(String[] args) {
    String fileName = "<your file path>";
    try (Stream<String> stream = Files.lines(Paths.get(fileName))) {
        stream.forEach(<method to split the string with ',' as delimiter and call the constructor via the Reflection API>);
    } catch (IOException e) {
        e.printStackTrace();
    }
}
I am trying to find the best way to sort a text file's lines into a Java collection.
Using a sorted set removes duplicates and I don't want that.
PriorityQueue does the job, but I need my class to be Iterable, and PriorityQueue's iterator does not return elements in sorted order.
Now I am confused between using Arrays.sort and this approach:
using a PriorityQueue while reading lines from the file, then copying the final queue into a list to use its iterator?
public class FileSorter implements Iterable<String> {
    // the priority queue orders the lines; duplicates are kept
    private PriorityQueue<String> lines0 = new PriorityQueue<>();
    private ArrayList<String> lines = new ArrayList<>();

    public void readFiles(String[] filePaths) throws IOException {
        String line;
        for (String path : filePaths) {
            // try-with-resources closes each file, not just the last one
            try (BufferedReader buf = new BufferedReader(new FileReader(path))) {
                // iterating through the lines and adding them to the collection
                while ((line = buf.readLine()) != null) {
                    if (line.trim().length() > 0) { // no blank lines
                        lines0.add(line);
                    }
                }
            }
        }
        // polling the queue yields the lines in sorted order
        while (!lines0.isEmpty()) {
            lines.add(lines0.poll());
        }
    }

    public Iterator<String> iterator() {
        return lines.iterator();
    }
}
Thank you.
I think implementing Iterable is not the best approach because you should prefer composition over inheritance, and it's 2017 after all; no one implements their own collection classes anymore. That said, how about the following?
public class Main {
    public static void main(String[] args) throws IOException, URISyntaxException {
        for (String line : new FileSorter(new File(Main.class.getResource("test.txt").toURI()).toPath())) {
            System.out.println(line);
        }
    }

    static class FileSorter implements Iterable<String> {
        private final Path path;

        FileSorter(Path path) {
            this.path = path;
        }

        @Override
        public Iterator<String> iterator() {
            try {
                return Files.lines(path)
                        .sorted()
                        .iterator();
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }
    }
}
Given a file test.txt in the same dir as the class Main:
a
b
a
c
The above program prints:
a
a
b
c
Iterable has different semantics than Stream: the former can be reused, while the latter can only be used once (until a terminal operation). Thus, my implementation reads the file every time you call iterator(). I didn't attempt to optimize it because you didn't ask for it, and premature optimization is the root of all evil.
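If re-reading the file on every iteration is undesirable, the one-shot approach from the question also works: read the lines once into a List and sort it, which (unlike a TreeSet) keeps duplicates. A sketch:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Collections;
import java.util.List;

public class SortOnce {
    // Reads the file once; sorting a List keeps duplicate lines,
    // unlike collecting into a sorted set.
    static List<String> sortedLines(Path path) throws IOException {
        List<String> lines = Files.readAllLines(path);
        Collections.sort(lines);
        return lines;
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("test", ".txt");
        Files.write(tmp, List.of("a", "b", "a", "c"));
        System.out.println(sortedLines(tmp)); // [a, a, b, c]
        Files.delete(tmp);
    }
}
```

The returned List is already Iterable, so no custom iterator() implementation is needed at all.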
It seems like there are two ways to write the content of a JSONObject to a writer. I can either do
myWriter.write(myJSONObj.toString());
Or
myJSONObj.write(myWriter);
Is there any reason why anyone would choose one way over the other?
According to the source code:
public String toString() {
    try {
        return this.toString(0);
    } catch (Exception e) {
        return null;
    }
}

public String toString(int indentFactor) throws JSONException {
    StringWriter w = new StringWriter();
    synchronized (w.getBuffer()) {
        return this.write(w, indentFactor, 0).toString();
    }
}

public Writer write(Writer writer) throws JSONException {
    return this.write(writer, 0, 0);
}
so basically, the first approach:
myWriter.write(myJSONObj.toString());
1. Creates a StringWriter.
2. Passes the writer to write(Writer writer, int indentFactor, int indent).
3. The JSON content gets written to the writer.
4. The content of the writer is converted to a String via StringWriter#toString().
5. The final string gets written to myWriter.
The second approach:
myJSONObj.write(myWriter);
1. Passes the writer to write(Writer writer, int indentFactor, int indent).
2. The JSON content gets written to the writer.
So the second approach skips building the whole document as an intermediate String, which makes it preferable for large objects.
I have got this piece of code (I didn't write it, I'm just maintaining it):
public class MyMultipartResolver extends CommonsMultipartResolver {

    public List parseEmptyRequest(HttpServletRequest request) throws IOException, FileUploadException {
        String contentType = request.getHeader(CONTENT_TYPE);
        int boundaryIndex = contentType.indexOf("boundary=");
        InputStream input = request.getInputStream();
        byte[] boundary = contentType.substring(boundaryIndex + 9).getBytes(); // 9 = "boundary=".length()
        MultipartStream multi = new MultipartStream(input, boundary);
        multi.setHeaderEncoding(getHeaderEncoding());
        ArrayList items = new ArrayList();
        boolean nextPart = multi.skipPreamble();
        while (nextPart) {
            Map headers = parseHeaders(multi.readHeaders());
            // String fieldName = getFieldName(headers);
            String subContentType = getHeader(headers, CONTENT_TYPE);
            if (subContentType == null) {
                FileItem item = createItem(headers, true);
                OutputStream os = item.getOutputStream();
                try {
                    multi.readBodyData(os);
                } finally {
                    os.close();
                }
                items.add(item);
            } else {
                multi.discardBodyData();
            }
            nextPart = multi.readBoundary();
        }
        return items;
    }
}
I am using commons-fileupload.jar version 1.2.1 and obviously the code is using some deprecated methods...
Anyway, while trying to use this code to upload a very large file (780 MB) I get this:
org.apache.commons.fileupload.MultipartStream$MalformedStreamException: Stream ended unexpectedly
at org.apache.commons.fileupload.MultipartStream$ItemInputStream.makeAvailable(MultipartStream.java:983)
at org.apache.commons.fileupload.MultipartStream$ItemInputStream.read(MultipartStream.java:887)
at java.io.InputStream.read(InputStream.java:89)
at org.apache.commons.fileupload.util.Streams.copy(Streams.java:94)
at org.apache.commons.fileupload.util.Streams.copy(Streams.java:64)
at org.apache.commons.fileupload.MultipartStream.readBodyData(MultipartStream.java:593)
at org.apache.commons.fileupload.MultipartStream.discardBodyData(MultipartStream.java:619)
which is thrown from the 'multi.discardBodyData();' line.
My question:
How can I avoid this error and successfully collect the FileItems?
try {
    // ... the parsing loop above ...
} catch (org.apache.commons.fileupload.MultipartStream.MalformedStreamException e) {
    e.printStackTrace();
    return ERROR;
}
Catch the exception and handle it, e.g. by returning an error result that your Struts action mapping can forward to an error page.