In Vaadin 7, how does one lazily determine the file name when using FileDownloader?
final Button downloadButton = new Button("Download file");
FileDownloader downloader = new FileDownloader(new StreamResource(new StreamSource() {
    @Override
    public InputStream getStream() {
        return new ByteArrayInputStream(expensiveCalculationOfContent());
    }
}, "file.snub"));
downloader.extend(downloadButton);
In this code sample, the filename ("file.snub", which is clearly rubbish) has to be known early on.
How can one lazily determine the filename of the downloaded file?
I do not know if it is dirty, but this works: extend FileDownloader and override handleConnectorRequest() to call StreamResource.setFilename() prior to calling the super method.
{
    final Button downloadButton = new Button("Download file");
    final StreamResource stream = new StreamResource(new StreamSource() {
        @Override
        public InputStream getStream() {
            return new ByteArrayInputStream("Hola".getBytes());
        }
    }, "badname.txt");
    FileDownloader downloader = new FileDownloader(stream) {
        @Override
        public boolean handleConnectorRequest(VaadinRequest request,
                VaadinResponse response, String path) throws IOException {
            stream.setFilename("better-name.txt");
            return super.handleConnectorRequest(request, response, path);
        }
    };
    downloader.extend(downloadButton);
    layout.addComponent(downloadButton);
}
This is the final solution I came up with:
import static com.google.common.base.Preconditions.checkNotNull; // assuming Guava's Preconditions

/**
 * This specializes {@link FileDownloader} in a way such that both the file name and content can be
 * determined on-demand, i.e. when the user has clicked the component.
 */
public class OnDemandFileDownloader extends FileDownloader {

    /**
     * Provide both the {@link StreamSource} and the filename in an on-demand way.
     */
    public interface OnDemandStreamResource extends StreamSource {
        String getFilename();
    }

    private static final long serialVersionUID = 1L;

    private final OnDemandStreamResource onDemandStreamResource;

    public OnDemandFileDownloader(OnDemandStreamResource onDemandStreamResource) {
        super(new StreamResource(onDemandStreamResource, ""));
        this.onDemandStreamResource = checkNotNull(onDemandStreamResource,
            "The given on-demand stream resource may never be null!");
    }

    @Override
    public boolean handleConnectorRequest(VaadinRequest request, VaadinResponse response, String path)
            throws IOException {
        getResource().setFilename(onDemandStreamResource.getFilename());
        return super.handleConnectorRequest(request, response, path);
    }

    private StreamResource getResource() {
        return (StreamResource) this.getResource("dl");
    }
}
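A minimal usage sketch (mine, not part of the original solution; it reuses the question's expensiveCalculationOfContent() placeholder):

final Button downloadButton = new Button("Download file");
OnDemandFileDownloader downloader = new OnDemandFileDownloader(new OnDemandStreamResource() {
    @Override
    public String getFilename() {
        // The name is resolved only when the user clicks the button
        return "export-" + System.currentTimeMillis() + ".csv";
    }

    @Override
    public InputStream getStream() {
        return new ByteArrayInputStream(expensiveCalculationOfContent());
    }
});
downloader.extend(downloadButton);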
If one assumes that lazily determining a file name means dynamically setting the filename irrespective of what the actual file-system name may be, then the code below is what I'm using.
In the code below, localFile points to a file on the local file system whose name we want to change upon download. A use case for this would be when files were uploaded to tmp with filenames containing random characters that were not present in the original upload.
File file = new File(localFile);
final FileResource fileResource = new FileResource(file);
if (!file.exists()) {
    throw new IllegalStateException();
}
final StreamResource stream = new StreamResource(new StreamSource() {
    @Override
    public InputStream getStream() {
        return fileResource.getStream().getStream();
    }
}, "newname.txt");
FileDownloader fileDownloader = new FileDownloader(stream);
fileDownloader.extend(downloadButton);
I am using Spring Batch and have an ItemWriter as follows:
public class MyItemWriter implements ItemWriter<Fixing> {

    private final FlatFileItemWriter<Fixing> writer;
    private final FileSystemResource resource;

    public MyItemWriter() {
        this.writer = new FlatFileItemWriter<>();
        this.resource = new FileSystemResource("target/output-teste.txt");
    }

    @Override
    public void write(List<? extends Fixing> items) throws Exception {
        this.writer.setResource(new FileSystemResource(resource.getFile()));
        this.writer.setLineAggregator(new PassThroughLineAggregator<>());
        this.writer.afterPropertiesSet();
        this.writer.open(new ExecutionContext());
        this.writer.write(items);
    }

    @AfterWrite
    private void close() {
        this.writer.close();
    }
}
When I run my Spring Batch job, the items are written to the file as:
Fixing{id='123456', source='TEST', startDate=null, endDate=null}
Fixing{id='1234567', source='TEST', startDate=null, endDate=null}
Fixing{id='1234568', source='TEST', startDate=null, endDate=null}
1/ How can I write just the data, so that the values are comma-separated and null values are not written? The target file should look like this:
123456,TEST
1234567,TEST
1234568,TEST
2/ Secondly, I am having an issue where I can only see the file get created once I exit the Spring Boot application. What I would like is for the file to be available once all the items have been processed and written, without closing the Spring Boot application.
There are multiple options for writing the CSV file. Regarding the second question, flushing the writer will solve the issue.
https://howtodoinjava.com/spring-batch/flatfileitemwriter-write-to-csv-file/
We prefer to use OpenCSV with Spring Batch, as it gives us more speed and control on huge files. An example snippet is below:
class DocumentWriter implements ItemWriter<BaseDTO>, Closeable {

    private static final Logger LOG = LoggerFactory.getLogger(DocumentWriter.class);

    private static final String[] columns = new String[] { "csvcolumn1", "csvcolumn2", "csvcolumn3",
            "csvcolumn4", "csvcolumn5", "csvcolumn6", "csvcolumn7" };

    private ColumnPositionMappingStrategy<Statement> strategy;
    private BufferedWriter writer;
    private StatefulBeanToCsv<Statement> beanToCsv;
    private String filename;

    public DocumentWriter() throws Exception {
        strategy = new ColumnPositionMappingStrategy<Statement>();
        strategy.setType(Statement.class);
        strategy.setColumnMapping(columns);
        // env (Spring Environment) and processCount are injected elsewhere in the real class
        filename = env.getProperty("globys.statement.cdf.path") + "-" + processCount + ".dat";
        File cdf = new File(filename);
        if (cdf.exists()) {
            writer = Files.newBufferedWriter(Paths.get(filename), StandardCharsets.UTF_8, StandardOpenOption.APPEND);
        } else {
            writer = Files.newBufferedWriter(Paths.get(filename), StandardCharsets.UTF_8, StandardOpenOption.CREATE_NEW);
        }
        beanToCsv = new StatefulBeanToCsvBuilder<Statement>(writer).withQuotechar(CSVWriter.NO_QUOTE_CHARACTER)
                .withMappingStrategy(strategy).withSeparator(',').build();
    }

    @Override
    public void write(List<? extends BaseDTO> items) throws Exception {
        List<Statement> settlementList = new ArrayList<Statement>();
        for (int i = 0; i < items.size(); i++) {
            BaseDTO baseDTO = items.get(i);
            settlementList.addAll(baseDTO.getStatementList());
        }
        beanToCsv.write(settlementList);
        writer.flush();
    }

    @PreDestroy
    @Override
    public void close() throws IOException {
        writer.close();
    }
}
Since you are using PassThroughLineAggregator, which calls item.toString() to write the object, overriding the toString() method of Fixing (or of classes extending it) should fix it.
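For example, a minimal sketch of such an override (assuming Fixing keeps its fields as id, source, startDate and endDate):

// Inside Fixing.java: emit only the non-null fields, comma-separated
@Override
public String toString() {
    return java.util.stream.Stream.of(id, source, startDate, endDate)
            .filter(java.util.Objects::nonNull)
            .map(Object::toString)
            .collect(java.util.stream.Collectors.joining(","));
}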
1/ How can I write just the data, so that the values are comma-separated and null values are not written?
You need to provide a custom LineAggregator that filters out null fields.
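A minimal sketch of such an aggregator (assuming Fixing exposes getters for its four fields):

import java.util.Objects;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import org.springframework.batch.item.file.transform.LineAggregator;

public class FixingLineAggregator implements LineAggregator<Fixing> {

    @Override
    public String aggregate(Fixing item) {
        // Keep only non-null values and join them with commas
        return Stream.of(item.getId(), item.getSource(), item.getStartDate(), item.getEndDate())
                .filter(Objects::nonNull)
                .map(Object::toString)
                .collect(Collectors.joining(","));
    }
}

You would then set it on the writer with writer.setLineAggregator(new FixingLineAggregator()) instead of the PassThroughLineAggregator.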
2/ Secondly, I am having an issue where I can only see the file get created once I exit the Spring Boot application
This is probably because you are calling this.writer.open in the write method, which is not correct. You need to make your item writer implement ItemStream and call this.writer.open and this.writer.close respectively in ItemStream#open and ItemStream#close.
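A minimal sketch of that restructuring (FixingLineAggregator is the assumed aggregator sketched above):

import java.util.List;
import org.springframework.batch.item.ExecutionContext;
import org.springframework.batch.item.ItemStream;
import org.springframework.batch.item.ItemStreamException;
import org.springframework.batch.item.ItemWriter;
import org.springframework.batch.item.file.FlatFileItemWriter;
import org.springframework.core.io.FileSystemResource;

public class MyItemWriter implements ItemWriter<Fixing>, ItemStream {

    private final FlatFileItemWriter<Fixing> writer = new FlatFileItemWriter<>();

    @Override
    public void open(ExecutionContext executionContext) throws ItemStreamException {
        writer.setResource(new FileSystemResource("target/output-teste.txt"));
        writer.setLineAggregator(new FixingLineAggregator());
        try {
            writer.afterPropertiesSet();
        } catch (Exception e) {
            throw new ItemStreamException("Failed to initialize writer", e);
        }
        writer.open(executionContext); // open once per step, not on every write
    }

    @Override
    public void write(List<? extends Fixing> items) throws Exception {
        writer.write(items);
    }

    @Override
    public void update(ExecutionContext executionContext) throws ItemStreamException {
        writer.update(executionContext);
    }

    @Override
    public void close() throws ItemStreamException {
        writer.close(); // flushes and releases the file while the app keeps running
    }
}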
I want to process files with a Flink stream in which two lines belong together. In the first line there is a header and in the second line a corresponding text.
The files are located on my local file system. I am using the readFile(fileInputFormat, path, watchType, interval, pathFilter, typeInfo) method with a custom FileInputFormat.
My streaming job class looks like this:
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStream<Read> inputStream = env.readFile(new ReadInputFormatTest("path/to/monitored/folder"), "path/to/monitored/folder", FileProcessingMode.PROCESS_CONTINUOUSLY, 100);
inputStream.print();
env.execute("Flink Streaming Java API Skeleton");
and my ReadInputFormatTest like this:
public class ReadInputFormatTest extends FileInputFormat<Read> {

    private transient FileSystem fileSystem;
    private transient BufferedReader reader;
    private final String inputPath;
    private String headerLine;
    private String readLine;

    public ReadInputFormatTest(String inputPath) {
        this.inputPath = inputPath;
    }

    @Override
    public void open(FileInputSplit inputSplit) throws IOException {
        FileSystem fileSystem = getFileSystem();
        this.reader = new BufferedReader(new InputStreamReader(fileSystem.open(inputSplit.getPath())));
        this.headerLine = reader.readLine();
        this.readLine = reader.readLine();
    }

    private FileSystem getFileSystem() {
        if (fileSystem == null) {
            try {
                fileSystem = FileSystem.get(new URI(inputPath));
            } catch (URISyntaxException | IOException e) {
                throw new RuntimeException(e);
            }
        }
        return fileSystem;
    }

    @Override
    public boolean reachedEnd() throws IOException {
        return headerLine == null;
    }

    @Override
    public Read nextRecord(Read r) throws IOException {
        r.setHeader(headerLine);
        r.setSequence(readLine);
        headerLine = reader.readLine();
        readLine = reader.readLine();
        return r;
    }
}
As expected, the headers and the text are stored together in one object. However, the file is read eight times, so the problem is the parallelization. Where and how can I specify that a file is processed only once, while several files are processed in parallel?
Or do I have to change my custom FileInputFormat even further?
I would modify your source to emit the available filenames (instead of the actual file contents) and then add a new processor that reads a name from the input stream and emits pairs of lines. In other words, split the current source into a source followed by a processor. The processor can run at any degree of parallelism, while the source would be a single instance.
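A rough sketch of this split (FileNameSource is a hypothetical single-instance source that emits one path per discovered file; FileSystem and Path are Flink's org.apache.flink.core.fs classes, and only the two-line pairing logic is taken from your format):

// Hypothetical reader: takes a file path, emits header/sequence pairs
public static class PairReader extends RichFlatMapFunction<String, Read> {
    @Override
    public void flatMap(String path, Collector<Read> out) throws Exception {
        FileSystem fs = FileSystem.get(new URI(path));
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(fs.open(new Path(path))))) {
            String header;
            while ((header = reader.readLine()) != null) {
                Read r = new Read();
                r.setHeader(header);
                r.setSequence(reader.readLine());
                out.collect(r);
            }
        }
    }
}

// In the job: the source runs as a single instance, the reader in parallel
DataStream<Read> inputStream = env
        .addSource(new FileNameSource("path/to/monitored/folder"))
        .setParallelism(1)
        .flatMap(new PairReader()); // scales to the job's parallelism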
I have code which runs Apache FOP against XML content and XSL markup and gives me the Apache Intermediate Format output:
StreamSource contentSource = new StreamSource(xmlContentStream);
StreamSource transformSource = new StreamSource(xslMarkupStream);
ByteArrayOutputStream outStream = new ByteArrayOutputStream();
Transformer xslfoTransformer = getTransformer(transformSource);
FOUserAgent foUserAgent = fopFactory.newFOUserAgent();
IFDocumentHandler targetHandler = foUserAgent.getRendererFactory().createDocumentHandler(
foUserAgent, MimeConstants.MIME_PDF);
FPSIFSerializer fpsSerializer = new FPSIFSerializer();
fpsSerializer.setContext(new IFContext(foUserAgent));
fpsSerializer.mimicDocumentHandler(targetHandler);
foUserAgent.setDocumentHandlerOverride(fpsSerializer);
Fop fop = fopFactory.newFop("application/X-fop-intermediate-format", foUserAgent, outStream);
DefaultHandler defaultHandler = fop.getDefaultHandler();
Result res = new SAXResult(defaultHandler);
xslfoTransformer.transform(contentSource, res);
Then I use that Intermediate Format file to render PDF and PNG files out of it.
I'm able to set up my own serializer here (FPSIFSerializer()).
I have reports of several pages, but I don't need to process all of them. Is there any way to skip some pages, or extract them from the Intermediate Format, so that I can e.g. render only the 1st page as PNG and then the 2nd as PDF, etc.?
There is an example of how to concatenate files via IFConcatenator at http://svn.apache.org/viewvc/xmlgraphics/fop/branches/archive/fop-1_1/examples/embedding/java/embedding/intermediate/ExampleConcat.java?view=markup, so I wonder about the best way to split the multipage file.
Thank you!
The way I've done it is using a custom document handler.
/**
 * Custom Apache FOP Intermediate Format document handler which allows page skipping.
 * Not thread safe.
 */
public class IFPageFilter extends IFDocumentHandlerProxy {

    private static final Logger LOGGER = LoggerFactory.getLogger(IFPageFilter.class);

    private int currentPage;
    private final int desiredPage;

    /**
     * @param delegate    the real document handler
     * @param desiredPage the page you want to render (1-based); other pages will be skipped
     */
    public IFPageFilter(final IFDocumentHandler delegate, final int desiredPage) {
        super(delegate);
        this.desiredPage = desiredPage;
    }

    @Override
    public void startPage(final int index, final String name, final String pageMasterName, final Dimension size) throws IFException {
        currentPage = index + 1;
        if (currentPage == desiredPage) {
            super.startPage(index, name, pageMasterName, size);
        } else {
            // do nothing
            LOGGER.debug("Page skipped");
        }
    }

    @Override
    public IFPainter startPageContent() throws IFException {
        if (currentPage == desiredPage) {
            return super.startPageContent();
        } else {
            return EmptyPainter.getInstance();
        }
    }

    @Override
    public void endPageContent() throws IFException {
        if (currentPage == desiredPage) {
            super.endPageContent();
        }
    }
}
Then you can attach your handler like this:
final IFDocumentHandler targetHandler = FOP_FACTORY.getRendererFactory().createDocumentHandler(userAgent, mime);
final IFPageFilter documentHandler = new IFPageFilter(targetHandler, page);
final ByteArrayOutputStream mimeOut = new ByteArrayOutputStream(XSL_STREAM_BUFFER_SIZE);
IFUtil.setupFonts(documentHandler);
// Tell the target handler where to write the PDF to
targetHandler.setResult(new StreamResult(mimeOut));
try (final InputStream is = ifStream.toInputStream()) {
final Source src = new StreamSource(is);
new IFParser().parse(src, documentHandler, userAgent);
}
return mimeOut;
and you will get the only page you need in the output stream.
The EmptyPainter class is a dirty hack: an empty implementation of Apache's IFPainter, used here to skip page content and avoid an NPE. I'm not happy about it, but that is the only way I was able to make it work.
Please note that I use FOP 1.1; if you are faced with such problems, it is worth looking at trunk, since some of them are already solved there. I guess the dirty hack with EmptyPainter will not be necessary in trunk.
Please give tips if something could be done better here.
Thanks
I am trying to implement a check that compares files before they are uploaded.
If a file with the same name already exists in the system, the user should be asked whether to create a new version or just overwrite it.
Here is the problem: how do I get the file name?
I can't use receiveUpload(), because after this method the file is removed from the upload component?
The problem is that once you start an upload using the Upload component, it can only be interrupted by calling the interruptUpload() method, and you cannot resume anytime later.
The interruption is permanent.
This means you cannot pause in the middle of the upload to see if you already have the file in your system. You have to upload the file all the way.
Considering this drawback, you can still check in your system whether you have the file after the upload finishes. If you have the file, you can show a confirmation dialog in which you decide whether to keep the file or overwrite it.
The following is an example in which I check in the "system" (I just keep a String list with the filenames) if the file has already been uploaded:
public class RestrictingUpload extends Upload implements Upload.SucceededListener, Upload.Receiver {

    private List<String> uploadedFilenames;
    private ByteArrayOutputStream latestUploadedOutputStream;

    public RestrictingUpload() {
        setCaption("Upload");
        setButtonCaption("Upload file");
        addSucceededListener(this);
        setReceiver(this);
        uploadedFilenames = new ArrayList<String>();
    }

    @Override
    public OutputStream receiveUpload(String filename, String mimeType) {
        latestUploadedOutputStream = new ByteArrayOutputStream();
        return latestUploadedOutputStream;
    }

    @Override
    public void uploadSucceeded(SucceededEvent event) {
        if (fileExistsInSystem(event.getFilename())) {
            confirmOverwrite(event.getFilename());
        } else {
            uploadedFilenames.add(event.getFilename());
        }
    }

    private void confirmOverwrite(final String filename) {
        ConfirmDialog confirmDialog = new ConfirmDialog();
        String message = String.format("The file %s already exists in the system. Overwrite?", filename);
        confirmDialog.show(getUI(), "Overwrite?", message, "Overwrite", "Cancel", new ConfirmDialog.Listener() {
            @Override
            public void onClose(ConfirmDialog dialog) {
                if (dialog.isConfirmed()) {
                    copyFileToSystem(filename);
                }
            }
        });
    }

    private void copyFileToSystem(String filename) {
        try {
            IOUtils.write(latestUploadedOutputStream.toByteArray(), new FileOutputStream(filename));
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (IOException e2) {
            e2.printStackTrace();
        }
    }

    private boolean fileExistsInSystem(String filename) {
        return uploadedFilenames.contains(filename);
    }
}
Note that I have used 2 external libraries:
Apache Commons IO 2.4 (http://mvnrepository.com/artifact/commons-io/commons-io/2.4) for writing to streams
ConfirmDialog from Vaadin Directory (https://vaadin.com/directory#addon/confirmdialog)
You can get the code snippet for this class from Gist: https://gist.github.com/gabrielruiu/9960772 which you can paste into your UI and test it out.
In my app I display PDFs using a ByteArrayResource.
This was working fine until I started working with bigger files. The conversion to a byte array keeps giving me an out-of-memory error.
This is how I do it at the moment:
File myPdf = new File(thePath);
FileInputStream fin = new FileInputStream(myPdf);
final byte fileContent[] = new byte[(int) myPdf.length()];
fin.read(fileContent);
fin.close();
ResourceReference rr = new ResourceReference(myPdf.getName()) {
    @Override
    public IResource getResource() {
        return new ByteArrayResource("Application/pdf", fileContent);
    }
};
if (rr.canBeRegistered()) {
    getApplication().getResourceReferenceRegistry().registerResourceReference(rr);
}
return wmc;
Is there a better way to display a big file?
Try using ResourceStreamResource and FileResourceStream:
File myPdf=new File(thePath);
FileResourceStream frs = new FileResourceStream(myPdf);
ResourceStreamResource rsr = new ResourceStreamResource(frs);
rsr.setContentDisposition(ContentDisposition.ATTACHMENT);
rsr.setFileName(fileName);
//the same code for resource reference creation and registration
//...
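For completeness, a sketch of that elided registration, mirroring the question's own code:

ResourceReference rr = new ResourceReference(myPdf.getName()) {
    @Override
    public IResource getResource() {
        return rsr; // stream from disk instead of buffering the whole file in memory
    }
};
if (rr.canBeRegistered()) {
    getApplication().getResourceReferenceRegistry().registerResourceReference(rr);
}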
Not entirely sure (never really used them myself), but a ContextRelativeResource may be an option. Perhaps something like:
final File myPdf = new File(thePath);
ResourceReference rr = new ResourceReference(myPdf.getName()) {
    @Override
    public IResource getResource() {
        // You'll need to adjust the path here to be relative to your context
        return new ContextRelativeResource(myPdf.getAbsolutePath());
    }
};
if (rr.canBeRegistered()) {
    getApplication().getResourceReferenceRegistry().registerResourceReference(rr);
}