I am using Java and Apache Velocity 1.7 to evaluate templates.
The following is sample code:
public void internalEvaluate(Map<String, Object> customContext, String templateText) throws IOException {
    // add custom context to VelocityContext
    VelocityContext context = new VelocityContext();
    for (Map.Entry<String, Object> entry : customContext.entrySet()) {
        context.put(entry.getKey(), entry.getValue());
    }

    // define writer
    StringWriter output = new StringWriter();

    // define logTag
    String logTag = "TestVTL";

    // check input template text
    if (templateText == null)
        templateText = "$noDescription";

    Velocity.evaluate(context, output, logTag, templateText);

    // write output to file
    saveToFile(output);
}
However, certain combinations of customContext and templateText can produce very large output.
The output file is created, but it is too big to open in an editor.
Below are my questions.
Q1.
I would like to limit or check the size of the output at runtime (or before calling evaluate()) and issue a warning about creating too large a file.
Does Velocity provide a configuration option or API to do something like this?
Q2.
The evaluation process may take a long time.
I would like to know the progress status of the Velocity evaluation process.
Is it possible to get progress information?
Best regards,
Since Velocity only sees a Writer, it has no means of counting the output size.
Your best option would be to implement a CustomStringWriter class that will throw an exception when its internal size has reached a certain limit.
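A minimal sketch of such a writer, assuming a character limit (the class name, limit handling and exception are my own; Velocity itself provides no such API):

import java.io.StringWriter;

public class SizeLimitedStringWriter extends StringWriter {

    private final int maxChars;

    public SizeLimitedStringWriter(int maxChars) {
        this.maxChars = maxChars;
    }

    @Override
    public void write(char[] cbuf, int off, int len) {
        checkLimit(len);
        super.write(cbuf, off, len);
    }

    @Override
    public void write(String str) {
        checkLimit(str.length());
        super.write(str);
    }

    // other write/append overloads can be overridden the same way if needed
    private void checkLimit(int incoming) {
        if (getBuffer().length() + incoming > maxChars) {
            throw new IllegalStateException("Template output exceeded " + maxChars + " characters");
        }
    }
}

In internalEvaluate() you would then pass new SizeLimitedStringWriter(someLimit) instead of the plain StringWriter, catch the exception around Velocity.evaluate(), and turn it into your warning message.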
Hi all, I am trying to use Apache Commons Exec to create and write to a file.
The command-line syntax to write to a file is as follows:
Example: PDFAnnotator.exe "C:\My Documents\Test.pdf"
I have tried the following:
public PrintResultHandler print(final File file, final long printJobTimeout, final boolean printInBackground)
        throws IOException {

    int exitValue;
    ExecuteWatchdog watchdog = null;
    PrintResultHandler resultHandler;

    // build up the command line to using a 'java.io.File'
    CommandLine commandLine = new CommandLine("C:\\Program Files (x86)\\Adobe\\Reader 11.0\\Reader\\AcroRd32.exe");
    //CommandLine cmdLine = new CommandLine("C:\\Program Files (x86)\\Google\\Chrome\\Application\\chrome.exe");
    Map map = new HashMap();
    map.put("file", new File("C:\\test\\invoice.pdf"));
    commandLine.addArgument("/p");
    commandLine.addArgument("/h");
    commandLine.addArgument("${file}");

    // create the executor and consider the exitValue '1' as success
    final Executor executor = new DefaultExecutor();
    executor.setExitValue(1);

    // create a watchdog if requested
    if (printJobTimeout > 0) {
        watchdog = new ExecuteWatchdog(printJobTimeout);
        executor.setWatchdog(watchdog);
    }

    // pass a "ExecuteResultHandler" when doing background printing
    if (printInBackground) {
        System.out.println("[print] Executing non-blocking print job ...");
        resultHandler = new PrintResultHandler(watchdog);
        executor.execute(commandLine, (Map<String, String>) resultHandler);
    }
    else {
        System.out.println("[print] Executing blocking print job ...");
        exitValue = executor.execute(commandLine);
        resultHandler = new PrintResultHandler(exitValue);
    }
    return resultHandler;
}
It does not create any PDF file as output. Can you please suggest what is wrong?
It seems this code has been modified from the Apache Commons Exec tutorial code, and a couple of the modifications you have made have caused problems.
Firstly, you have deleted the line
commandLine.setSubstitutionMap(map);
Without this line, you are creating the variable map, putting a single value into this map and then doing nothing further with it. Clearly, having a map that you never read any values out of achieves nothing. Reinstate this line, it's important.
The other problem is the line
executor.execute(commandLine, (Map<String, String>) resultHandler);
The difference between this code and the tutorial code is that you have added the cast to Map<String, String>. resultHandler is a PrintResultHandler, but this class does not implement Map<String, String> so this cast will fail.
I don't see why you have the cast at all. Get rid of it to leave you with:
executor.execute(commandLine, resultHandler);
If your code continues not to work, then I can't say what the reasons would be. Maybe the Adobe Reader executable isn't where you think it is; maybe the file doesn't exist or doesn't have read permissions. In any case, suitable details should be written to standard output or standard error to help you diagnose the problem further.
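For reference, a sketch of the relevant part of print() with both fixes applied (the paths and arguments are kept from your code):

CommandLine commandLine = new CommandLine("C:\\Program Files (x86)\\Adobe\\Reader 11.0\\Reader\\AcroRd32.exe");
commandLine.addArgument("/p");
commandLine.addArgument("/h");
commandLine.addArgument("${file}");

// reinstate the substitution map so ${file} is resolved to the PDF
Map<String, Object> map = new HashMap<String, Object>();
map.put("file", new File("C:\\test\\invoice.pdf"));
commandLine.setSubstitutionMap(map);

final Executor executor = new DefaultExecutor();
executor.setExitValue(1);

if (printInBackground) {
    resultHandler = new PrintResultHandler(watchdog);
    executor.execute(commandLine, resultHandler);   // no cast
} else {
    exitValue = executor.execute(commandLine);
    resultHandler = new PrintResultHandler(exitValue);
}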
I have a Velocity template that represents an XML file. I am populating the text between the tags using data passed to a VelocityContext object, which is then accessed inside the template.
Here is an example; let's call it myTemplate.vm:
<text>$myDocument.text</text>
This is how I am passing that data to the Velocity template and building the output as a String:
private String buildXml(Document pIncomingXml)
{
    // setup environment
    Properties lProperties = new Properties();
    lProperties.put("file.resource.loader.class", "org.apache.velocity.runtime.resource.loader.ClasspathResourceLoader");

    VelocityContext lVelocityContext = new VelocityContext();
    lVelocityContext.put("myDocument", pIncomingXml.getRootElement());

    StringWriter lOutput = new StringWriter();
    try
    {
        Velocity.init(lProperties);
        Velocity.mergeTemplate("myTemplate.vm", "ISO-8859-1", lVelocityContext, lOutput);
    }
    catch (Exception lEx)
    {
        throw new RuntimeException("Problems running velocity template, underlying error is " + lEx.getMessage(), lEx);
    }
    return lOutput.toString();
}
The problem is that when I access myDocument.text inside the template, the output text is not escaped for XML.
I found a workaround by also adding an escape tool to the VelocityContext, like so:
lVelocityContext.put("esc", new EscapeTool());
then wrapping my tag in the template using it:
<text>$esc.xml($myDocument.text)</text>
The reality is that I have a very large template, and manually wrapping each element in $esc.xml would be time-consuming. Is there a way to tell Velocity to escape for XML on every access to myDocument without editing the template file at all?
Yes, it's possible.
What you need to do is use the EscapeXmlReference event handler, which implements the reference insertion handler interface:
lProperties.put("eventhandler.referenceinsertion.class",
"org.apache.velocity.app.event.implement.EscapeXmlReference");
I have several files, each containing a JSON array of elements.
I have a process that takes each JSON element as a line from a file and processes it.
So I created a small utility that reads the JSON array and writes the elements, one per line, to another file.
The output of this utility will be the input of the other process.
I used Java 7 NIO (and Gson), and tried to use as much Java 7 NIO as possible.
Is there any improvement I can make?
What about the filter? Which approach is better?
Thanks,
public class TransformJsonsUsers {

    public TransformJsonsUsers() {
    }

    public static void main(String[] args) throws IOException {
        final Gson gson = new Gson();
        Path path = Paths.get("C:\\work\\data\\resources\\files");
        final Path outputDirectory = Paths
                .get("C:\\work\\data\\resources\\files\\output");

        DirectoryStream.Filter<Path> filter = new DirectoryStream.Filter<Path>() {
            @Override
            public boolean accept(Path entry) throws IOException {
                // which is better?
                // BasicFileAttributeView attView = Files.getFileAttributeView(entry, BasicFileAttributeView.class);
                // return attView.readAttributes().isRegularFile();
                return !Files.isDirectory(entry);
            }
        };

        DirectoryStream<Path> directoryStream = Files.newDirectoryStream(path, filter);
        directoryStream.forEach(new Consumer<Path>() {
            @Override
            public void accept(Path filePath) {
                String fileOutput = outputDirectory.toString() + File.separator + filePath.getFileName();
                Path fileOutputPath = Paths.get(fileOutput);
                try {
                    BufferedReader br = Files.newBufferedReader(filePath);
                    User[] users = gson.fromJson(br, User[].class);
                    BufferedWriter writer = Files.newBufferedWriter(fileOutputPath, Charset.defaultCharset());
                    for (User user : users) {
                        writer.append(gson.toJson(user));
                        writer.newLine();
                    }
                    writer.flush();
                } catch (IOException e) {
                    throw new RuntimeException(filePath.toString(), e);
                }
            }
        });
    }
}
There is no point in using a Filter if you want to read all the files from the directory. A Filter is primarily designed to apply some filter criteria and read a subset of files. Neither of the two approaches should make any real difference in overall performance.
If you are looking to improve performance, you can try a couple of different approaches.
Multi-threading
Depending on how many files exist in the directory and how powerful your CPU is, you can apply multi-threading to process more than one file at a time (see the sketch after this answer).
Queuing
Right now you are reading and writing each file synchronously. You can queue the content of each file using a Queue and create an asynchronous writer.
You can combine both of these approaches as well to improve performance further.
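For illustration, a minimal sketch of the multi-threading idea with an ExecutorService (path, filter and outputDirectory are the variables from the question; transformFile() is a hypothetical method holding your existing read/convert/write logic for one file):

ExecutorService pool = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
try (DirectoryStream<Path> stream = Files.newDirectoryStream(path, filter)) {
    for (final Path filePath : stream) {
        pool.submit(new Runnable() {
            @Override
            public void run() {
                transformFile(filePath, outputDirectory);
            }
        });
    }
}
pool.shutdown();
pool.awaitTermination(1, TimeUnit.HOURS); // declares InterruptedException; handle as appropriate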
Don't put the I/O into the filter. That's not what it's for. You should get the complete list of files and then process it. For example if the I/O creates another file in the directory, the behaviour is undefined. You might miss a file, or see the new file in the accept() method.
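A minimal way to follow that advice is to collect the paths first and only process them afterwards, for example:

// collect the paths first; no reading or writing happens while the directory stream is open
List<Path> files = new ArrayList<>();
try (DirectoryStream<Path> stream = Files.newDirectoryStream(path, filter)) {
    for (Path entry : stream) {
        files.add(entry);
    }
}

// now process the complete, stable list
for (Path filePath : files) {
    // read the JSON array and write the line-per-element output file here
}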
I'm a newbie in Spring Batch, and I would appreciate some help resolving this situation: I read some files with a MultiResourceItemReader and do some marshalling work. In the ItemProcessor I receive a String and return a Map<String, List<String>>. My problem is that in the ItemWriter I need to iterate over the keys of the Map and, for each one of them, generate a new file containing the values associated with that key. Can someone point me in the right direction for creating those files?
I'm also using a MultiResourceItemWriter because I need to generate files with a maximum number of lines.
Thanks in advance
Well, I finally got a solution. I'm not really excited about it, but it's working and I don't have much more time, so I've extended MultiResourceItemWriter and redefined the "write" method, processing the map's elements and writing the files myself.
In case anyone out there needs it, here it is.
@Override
public void write(List items) throws Exception {
    for (Object o : items) {
        // do some processing here
        writeFile(anotherObject);
    }
}

private void writeFile(AnotherObject anotherObject) throws IOException {
    File file = new File("name.xml");
    boolean restarted = file.exists();
    FileUtils.setUpOutputFile(file, restarted, true, true);
    StringBuffer sb = new StringBuffer();
    sb.append(xStream.toXML(anotherObject));
    FileOutputStream os = new FileOutputStream(file, true);
    BufferedWriter bufferedWriter = new BufferedWriter(new OutputStreamWriter(os, Charset.forName("UTF-8")));
    bufferedWriter.write(sb.toString());
    bufferedWriter.close();
}
And that's it. I want to believe there is a better option that I don't know about, but for the moment this is my solution. If anyone knows how I can enhance my implementation, I'd like to hear it.
I'm starting to design an application, that will, in part, run through a directory of files and compare their extensions to their file headers.
Does anyone have any advice as to the best way to approach this? I know I could simply have a lookup table that will contain the file's header signature. e.g., JPEG: \xFF\xD8\xFF\xE0
I was hoping there might be a simpler way.
Thanks in advance for your help.
I'm afraid it'll have to be more complicated than that. Not every file type has a header at all, and some (such as RAR) have their characteristic data structures at the end rather than at the beginning.
You may want to take a look at the Unix file command, which does the same job:
http://linux.die.net/man/1/file
http://linux.die.net/man/5/magic
If you don't need to do any dirty work on these values (and you don't have Linux), you could simply use an external program, like TrID, that can do this for you.
Maybe you can just work on its output without having to do it yourself. In any case, if you have only around 20 kinds of files to manage, a simple lookup table (e.g. HashMap<String, byte[]>) is not that bad. Of course, this will only work if the desired file format has a magic number; otherwise you are on your own (or dependent on an external program).
Because some file types lack a significant header (thanks @Michael), I would create a map from extension to a kind of type checker with a simple API like:
public interface TypeCheck {
    boolean isValid(InputStream data) throws IOException;
}
Now you can code something like
File toBeTested = ...;
Map<String, TypeCheck> typeCheckByExtension = ...;

TypeCheck check = typeCheckByExtension.get(getExtension(toBeTested.getName()));
if (check != null) {
    InputStream in = new FileInputStream(toBeTested);
    if (check.isValid(in)) {
        // process valid file
    } else {
        // process invalid file
    }
    in.close();
} else {
    // process unknown file
}
The header check for JPEG, for example, may look like this:
public class JpegTypeCheck implements TypeCheck {
    private static final byte[] HEADER = new byte[] {(byte) 0xFF, (byte) 0xD8, (byte) 0xFF, (byte) 0xE0};

    public boolean isValid(InputStream data) throws IOException {
        byte[] header = new byte[4];
        return data.read(header) == 4 && Arrays.equals(header, HEADER);
    }
}
For other types with no significant header you can implement completely different type checks.
You can extract the MIME type of each file and compare it against a map of MIME type to extensions (Map<String, List<String>>, where the key is the MIME type and the value is a list of valid extensions).
Resources:
Get the Mime Type from a File
JMimeMagic
On the same topic :
Java - HowTo extract MimeType from a byte[]
Getting A File's Mime Type In Java
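As a rough sketch of that idea (the map entries and the path are made up, and note that Files.probeContentType may itself fall back to the file name on some platforms, so a content-based detector such as JMimeMagic or Tika from the links above is usually more reliable):

// MIME type -> valid extensions (example entries only)
Map<String, List<String>> extensionsByMimeType = new HashMap<>();
extensionsByMimeType.put("image/jpeg", Arrays.asList("jpg", "jpeg"));
extensionsByMimeType.put("application/pdf", Arrays.asList("pdf"));

Path file = Paths.get("C:\\data\\photo.jpg");     // hypothetical file
String mimeType = Files.probeContentType(file);   // may return null
String name = file.getFileName().toString();
String extension = name.contains(".") ? name.substring(name.lastIndexOf('.') + 1).toLowerCase() : "";

List<String> valid = extensionsByMimeType.get(mimeType);
boolean extensionMatchesContent = valid != null && valid.contains(extension);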
You can determine the file type by reading the header using Apache Tika. The following code requires the Apache Tika jar.
InputStream is = MainApp.class.getResourceAsStream("/NetFx20SP1_x64.txt");
BufferedInputStream bis = new BufferedInputStream(is);
AutoDetectParser parser = new AutoDetectParser();
Detector detector = parser.getDetector();
Metadata md = new Metadata();
md.add(Metadata.RESOURCE_NAME_KEY,MainApp.class.getResource("/NetFx20SP1_x64.txt").getPath());
MediaType mediaType = detector.detect(bis, md);
System.out.println("MIMe Type of File : " + mediaType.toString());