I have a HashMap as a source of data and I'm generating a separate PDF for each key. But when the HashMap has more than one entry, I get only the last PDF — each iteration overrides the previous one. (I'm storing the HTML in the body variable, as you can see in the code.) I don't want to create separate HTML files and store them on the server. Is there a way to keep the HTML in memory and generate a separate PDF for every key in the HashMap?
public byte[] BulkPdf(Map jsonObject) throws ApplicationException, IOException {
    LinkedHashMap<String, List<Map<String, Object>>> hashMapDept = new LinkedHashMap<>();
    byte[] reportByte = null;
    if (!hashMapDept.isEmpty()) {
        Integer count = 0;
        for (Entry<String, List<Map<String, Object>>> entry : hashMapDept.entrySet()) {
            String body = "";
            Writer out = new StringWriter();
            Configuration cfg = new Configuration();
            try {
                cfg.setObjectWrapper(new DefaultObjectWrapper());
                String templateStr = UUID.randomUUID().toString().replaceAll("-", "");
                logger.debug("templatename --> ", templateStr);
                freemarker.template.Template freemarkerTemplate =
                        new freemarker.template.Template(templateStr, new StringReader(html), cfg);
                freemarkerTemplate.process(jsonObject, out);
                body = out.toString();
            } catch (TemplateException | IOException | ApplicationException e) {
                logger.debug("template pdf exception --> ", e.getMessage());
                e.printStackTrace();
            } finally {
                out.flush();
                cfg.clearTemplateCache();
            }
            try (ByteArrayOutputStream outputStream = new ByteArrayOutputStream()) {
                PdfRendererBuilder builder = new PdfRendererBuilder();
                builder.withHtmlContent(body, "pathforpdf");
                builder.useFastMode();
                builder.toStream(outputStream);
                builder.run();
                reportByte = outputStream.toByteArray();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
    return reportByte;
}
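The PDFs disappear because reportByte is reassigned on every loop iteration, so only the last rendering survives. One way to keep every PDF is to collect the rendered bytes per key. A minimal sketch of that idea (not from the original post — renderPdf here is a stand-in for the FreeMarker + PdfRendererBuilder steps):

```java
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

public class BulkPdfSketch {
    // Stand-in for the FreeMarker templating + OpenHTMLtoPDF rendering in the question.
    static byte[] renderPdf(String html) {
        return html.getBytes(StandardCharsets.UTF_8);
    }

    // Return one byte[] per key instead of reassigning a single reportByte variable.
    static Map<String, byte[]> bulkPdf(Map<String, String> htmlPerKey) {
        Map<String, byte[]> pdfPerKey = new LinkedHashMap<>();
        for (Map.Entry<String, String> entry : htmlPerKey.entrySet()) {
            pdfPerKey.put(entry.getKey(), renderPdf(entry.getValue()));
        }
        return pdfPerKey;
    }

    public static void main(String[] args) {
        Map<String, String> source = new LinkedHashMap<>();
        source.put("dept1", "<html><body>one</body></html>");
        source.put("dept2", "<html><body>two</body></html>");
        Map<String, byte[]> pdfs = bulkPdf(source);
        System.out.println(pdfs.size()); // prints 2 - one PDF per key
    }
}
```

If the caller ultimately needs a single byte[], the per-key PDFs can afterwards be merged or zipped, but the collection step above is what stops each iteration from destroying the previous result.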
How can I save this kind of map to file? (it should work for an android device too)
I tried:
Properties properties = new Properties();
for (Map.Entry<String, Object[]> entry : map.entrySet()) {
properties.put(entry.getKey(), entry.getValue());
}
try {
properties.store(new FileOutputStream(context.getFilesDir() + MainActivity.FileName), null);
} catch (IOException e) {
e.printStackTrace();
}
And I get:
class java.util.ArrayList cannot be cast to class java.lang.String (java.util.ArrayList and java.lang.String are in module java.base of loader 'bootstrap')
What should I do?
I was writing an answer based on serializing String values when I realized from your error that some of your values are apparently ArrayLists rather than plain Strings.
In any case, the problem happens when you store your properties: Properties is meant to hold only String keys and values, so store tries to cast each value to String and fails for your arrays/lists.
In my original answer I suggested manually joining your values when generating the file. This takes a small conversion step, because String.join will not accept an Object[] directly:
Properties properties = new Properties();
for (String key : map.keySet()) {
    Object[] values = map.get(key);
    // Perform null checking
    // Convert to Strings first: String.join only accepts CharSequences
    String value = Arrays.stream(values)
                         .map(Object::toString)
                         .collect(Collectors.joining(","));
    properties.put(key, value);
}
try {
properties.store(new FileOutputStream(context.getFilesDir() + MainActivity.FileName), null);
} catch (IOException e) {
e.printStackTrace();
}
For reading your values, you can use split:
Properties properties = new Properties();
Map<String, String[]> map = new HashMap<>();
InputStream in = null;
try {
    in = new FileInputStream(context.getFilesDir() + MainActivity.FileName);
    properties.load(in);
    for (String key : properties.stringPropertyNames()) {
        String value = properties.getProperty(key);
        // Perform null checking
        String[] values = value.split(",");
        map.put(key, values);
    }
} catch (Throwable t) {
t.printStackTrace();
} finally {
if (in != null) {
try {
in.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
But I think there is a much better approach: just use Java's built-in serialization mechanism to save and restore your information.
For saving your map use ObjectOutputStream:
try (ObjectOutputStream oos = new ObjectOutputStream(new FileOutputStream(context.getFilesDir() + MainActivity.FileName))){
oos.writeObject(map);
}
You can read your map back as follows:
Map<String, Object> map;
try (ObjectInputStream ois = new ObjectInputStream(new FileInputStream(context.getFilesDir() + MainActivity.FileName))){
map = (Map)ois.readObject();
}
If all the objects stored in your map are Serializable, this second approach is far more flexible, and it is not limited to String values like the first one.
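A minimal round trip of that idea, using an in-memory buffer instead of a file so it is easy to try (class and key names are illustrative):

```java
import java.io.*;
import java.util.*;

public class MapSerializationDemo {
    // Serialize a map and read it back; everything inside must be Serializable
    @SuppressWarnings("unchecked")
    static Map<String, Object> roundTrip(Map<String, Object> map) throws Exception {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(buffer)) {
            // copy into a known-Serializable map type before writing
            oos.writeObject(new HashMap<>(map));
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(buffer.toByteArray()))) {
            return (Map<String, Object>) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Map<String, Object> map = new HashMap<>();
        map.put("letters", new ArrayList<>(Arrays.asList("a", "b")));
        map.put("answer", 42);
        Map<String, Object> restored = roundTrip(map);
        System.out.println(restored.get("letters")); // prints [a, b]
        System.out.println(restored.get("answer"));  // prints 42
    }
}
```

Note that the ArrayList value — exactly the kind that made Properties.store throw — survives the round trip unchanged.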
When I launch the JAR file that I build, the JTable does not display UTF-8 characters. When I run the application inside IntelliJ IDEA, it displays them correctly.
I have a couple of ideas but cannot find a way to debug this. First, the table data is serialized; maybe there is something I don't know about serialization and I need to specify somewhere that it is UTF-8?
Second, maybe I need to specify the encoding where I create and populate the JTable as a GUI element? I am adding the code for both.
The code that I use for saving and retrieving data is here:
private void deserialiseStatsData(){
try {
FileInputStream fis = new FileInputStream("tabledata.ser");
ObjectInputStream ois = new ObjectInputStream(fis);
tableData = (TableData) ois.readObject();
fis.close();
ois.close();
} catch (Exception e) {
tableData = new TableData();
}
}
private void serialiseStatsData(){
try {
FileOutputStream fos = new FileOutputStream("tabledata.ser");
ObjectOutputStream oos = new ObjectOutputStream(fos);
oos.writeObject(tableData);
oos.close();
fos.close();
}
catch (Exception e) {e.printStackTrace();}
}
Code for creating and populating the table:
private void createUIComponents() {
String headers[] = {"Exercise", "Did Succeed?", "Times Failed Before 1st Success",
"Accuracy (%), ", "Checked Result Before Succeeding"};
TableData dataTable = null;
try {
FileInputStream fis = new FileInputStream("tabledata.ser");
ObjectInputStream ois = new ObjectInputStream(fis);
dataTable = (TableData) ois.readObject();
fis.close();
ois.close();
} catch (Exception e) {
e.printStackTrace();
}
Object[][] data;
int rows = dataTable.getRows();
int cols = headers.length;
data = new Object[rows][cols];
for (int j = 0; j < rows; j++) {
data[j][0] = dataTable.getLine(j).getExercise();
data[j][1] = dataTable.getLine(j).isSuccess();
data[j][2] = dataTable.getLine(j).getNoOfFails();
data[j][3] = dataTable.getLine(j).getPercentage();
data[j][4] = dataTable.getLine(j).isAnswerCheckBeforeSuccess();
}
table = new JTable(data, headers);
}
What I have tried:
Added lines to VM Options:
-Dconsole.encoding=UTF-8
-Dfile.encoding=UTF-8
Changed encoding in Settings -> File Encodings to UTF-8
At this point, I don't know what else I could do.
Edited:
In the JTable I get garbled placeholder glyphs (screenshot omitted) when I should get the proper symbols (screenshot omitted).
Char list which does not display:
union = '\u222A';
intersection = '\u2229';
product = '\u2A2F';
difference = '\u2216';
infinity = '\u221E';
belongsTo = '\u2208';
weakSubset = '\u2286';
properSubset = '\u2282';
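For what it's worth, Java object serialization writes strings in its own modified UTF-8 form, which does not depend on file.encoding, so the serialized table data itself is probably not where the characters get lost. A quick check of that (a sketch, not from the original code):

```java
import java.io.*;

public class UnicodeSerializationCheck {
    // Serialize a String and read it back; ObjectOutputStream encodes strings
    // in modified UTF-8 internally, independent of the platform default charset
    static String roundTrip(String s) throws Exception {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(buffer)) {
            oos.writeObject(s);
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(buffer.toByteArray()))) {
            return (String) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        String symbols = "\u222A\u2229\u2A2F\u2216\u221E\u2208\u2286\u2282";
        System.out.println(symbols.equals(roundTrip(symbols))); // prints true
    }
}
```

If the round trip is intact, a likelier culprit is the font the JTable uses at runtime; java.awt.Font.canDisplay(char) can tell you whether the table's current font actually has glyphs for these code points.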
I'm attempting to merge multiple byte arrays into a single PDF and have that working, but the file grows larger and larger when it should simply be replaced. It also seems that the file is not closing correctly. I don't know if I am missing something in the merge logic, but that's the only place I can think the problem would be.
public class MergePDF {
private static final Logger LOGGER = Logger.getLogger(MergePDF.class);
private static ByteArrayOutputStream baos = new ByteArrayOutputStream();
public static byte[] mergePDF (List<byte[]> pdfList) {
try {
Document PDFCombo = new Document();
PdfSmartCopy copyCombo = new PdfSmartCopy(PDFCombo, baos);
PDFCombo.open();
PdfReader readInputPdf = null;
int num_of_pages = 0;
for (int i = 0; i < pdfList.size(); i++) {
readInputPdf = new PdfReader(pdfList.get(i));
num_of_pages = readInputPdf.getNumberOfPages();
for (int page = 0 ; page < num_of_pages;) {
copyCombo.addPage(copyCombo.getImportedPage(readInputPdf, ++page));
}
}
PDFCombo.close();
} catch (Exception e) {
LOGGER.error(e);
}
return baos.toByteArray();
}
}
I assume that I'm missing some sort of close in the process, because when I save the file at a later time the size keeps growing, yet the PDF that I view doesn't gain any additional pages.
Here is how I am saving the PDF off before sending it to a third party. When sent it is a byte array.
try {
FileOutputStream out = new FileOutputStream(outMessage.getDocumentTitle());
out.write(outMessage.getPayload());
out.close();
} catch (FileNotFoundException e) {
return null;
} catch (IOException e) {
return null;
}
I have been told that there are multiple PDF Headers and EOF's in the bytearray that I'm sending over.
Based on #mkl's comment, it turns out that this was indeed true:
#mkl's suggestion in combination with your mention of the "close"
issue could indeed indicate that you're not replacing one set of bytes
by another set of bytes, but that you are, in fact, adding a new set
of bytes to the old set of bytes.
This is how to solve the problem:
public class MergePDF {
private static final Logger LOGGER = Logger.getLogger(MergePDF.class);
public static byte[] mergePDF (List<byte[]> pdfList) {
    // The buffer is now local, and declared before the try so the return statement can see it
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    try {
        Document PDFCombo = new Document();
        PdfSmartCopy copyCombo = new PdfSmartCopy(PDFCombo, baos);
PDFCombo.open();
PdfReader readInputPdf = null;
int num_of_pages = 0;
for (int i = 0; i < pdfList.size(); i++) {
readInputPdf = new PdfReader(pdfList.get(i));
num_of_pages = readInputPdf.getNumberOfPages();
for (int page = 0 ; page < num_of_pages;) {
copyCombo.addPage(copyCombo.getImportedPage(readInputPdf, ++page));
}
}
PDFCombo.close();
} catch (Exception e) {
LOGGER.error(e);
}
return baos.toByteArray();
}
}
Now you probably understand why the person teaching you how to code explained that using static variables is a bad idea in most cases.
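The accumulation is easy to reproduce in isolation. In this contrived sketch (not the original iText code), a static buffer keeps growing across calls, while a local buffer starts empty every time:

```java
import java.io.ByteArrayOutputStream;

public class StaticBufferPitfall {
    private static final ByteArrayOutputStream SHARED = new ByteArrayOutputStream();

    // Every call appends to the same shared buffer, so old output is never discarded
    static byte[] mergeWithStaticBuffer(byte[] input) {
        SHARED.write(input, 0, input.length);
        return SHARED.toByteArray();
    }

    // A fresh buffer per call returns only this call's output
    static byte[] mergeWithLocalBuffer(byte[] input) {
        ByteArrayOutputStream local = new ByteArrayOutputStream();
        local.write(input, 0, input.length);
        return local.toByteArray();
    }

    public static void main(String[] args) {
        byte[] doc = new byte[100];
        System.out.println(mergeWithStaticBuffer(doc).length); // 100
        System.out.println(mergeWithStaticBuffer(doc).length); // 200 - keeps growing
        System.out.println(mergeWithLocalBuffer(doc).length);  // 100
        System.out.println(mergeWithLocalBuffer(doc).length);  // 100
    }
}
```

The growing static buffer is also why the downloaded file contained multiple PDF headers and EOF markers: each call appended a complete new PDF after the previous one.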
I can't understand why my code is not running every time.
I am opening a Jasper report, but for the first 4 openings the report is cached, or the code is not executing (the code in the new StreamResource.StreamSource() does not execute the first 4 times; it runs only on the 5th attempt). Why? The first 4 times I get a PDF with the old, cached parameters.
Maybe someone knows the issue?
public static void open(final String fileName, final HashMap<String, Object> data ) {
mylog.pl("### Param's print # open Report: Filename:" + fileName);
try {
Iterator<?> i = data.keySet().iterator();
while (i.hasNext()) {
String id = i.next().toString();
String value = (data.get(id) != null) ? data.get(id).toString() : "null";
mylog.pl(" id: " + id + " value: " + value);
}
} catch (Exception e) {
e.printStackTrace();
mylog.pl(e.getMessage());
}
StreamResource.StreamSource source = null;
source = new StreamResource.StreamSource() {
public InputStream getStream() {
byte[] b = null;
InputStream reportStream = null;
try {
reportStream = new BufferedInputStream(new FileInputStream(PATH + fileName + JASPER));
b = JasperRunManager.runReportToPdf(reportStream, data, new JREmptyDataSource());
} catch (JRException ex) {
ex.printStackTrace();
mylog.pl("Err # JR" + ex.getMessage());
} catch (FileNotFoundException e) {
e.printStackTrace();
Utils.showMessage(SU.NOTFOUND);
return null;
}
return new ByteArrayInputStream(b);
}
};
StreamResource resource = null;
resource = new StreamResource(source, fileName + PDF);
resource.setMIMEType("application/pdf");
Page p = Page.getCurrent();
p.open(resource, "Report", false);
}
Here is the answer.
I had always used resource.setCacheTime(0); but really needed resource.setCacheTime(1000), because:
In theory <= 0 disables caching. In practice Chrome, Safari (and,
apparently, IE) all ignore <=0.
I am currently writing a Java application to retrieve BLOB-type data from a database. I use a query to get all the rows and put them in a List of Map<String, Object> holding the columns; when I need the data, I iterate the list.
However, I get an OutOfMemoryError when I retrieve the list of rows more than a couple of times. Do I need to release memory in the code? My code is as follows:
ByteArrayInputStream binaryStream = null;
OutputStream out = null;
try {
List<Map<String, Object>> result =
jdbcOperations.query(
sql,
new Object[] {id},
new RowMapper(){
public Object mapRow(ResultSet rs, int i) throws SQLException {
DefaultLobHandler lobHandler = new DefaultLobHandler();
Map<String, Object> results = new HashMap<String, Object>();
String fileName = rs.getString(ORIGINAL_FILE_NAME);
if (!StringUtils.isBlank(fileName)) {
results.put(ORIGINAL_FILE_NAME, fileName);
}
byte[] blobBytes = lobHandler.getBlobAsBytes(rs, "AttachedFile");
results.put(BLOB, blobBytes);
int entityID = rs.getInt(ENTITY_ID);
results.put(ENTITY_ID, entityID);
return results;
}
}
);
int count = 0;
for (Iterator<Map<String, Object>> iterator = result.iterator();
iterator.hasNext();)
{
count++;
Map<String, Object> row = iterator.next();
byte[] attachment = (byte[])row.get(BLOB);
final int entityID = (Integer)row.get(ENTITY_ID);
if( attachment != null) {
final String originalFilename = (String)row.get(ORIGINAL_FILE_NAME);
String stripFilename;
if (originalFilename.contains(":\\")) {
stripFilename = StringUtils.substringAfter(originalFilename, ":\\");
}
else {
stripFilename = originalFilename;
}
String filename = pathName + entityID + "\\"+ stripFilename;
boolean exist = (new File(filename)).exists();
iterator.remove(); // release the resource
if (!exist) {
binaryStream = new ByteArrayInputStream(attachment);
InputStream extractedStream = null;
try {
extractedStream = decompress(binaryStream);
final byte[] buf = IOUtils.toByteArray(extractedStream);
out = FileUtils.openOutputStream(new File(filename));
IOUtils.write(buf, out);
}
finally {
IOUtils.closeQuietly(extractedStream);
}
}
else {
continue;
}
}
}
}
catch (FileNotFoundException e) {
e.printStackTrace();
}
catch (IOException e) {
e.printStackTrace();
}
finally {
IOUtils.closeQuietly(out);
IOUtils.closeQuietly(binaryStream);
}
Consider reorganizing your code so that you don't keep all the blobs in memory at once. Instead of putting them all in a results map, output each one as you retrieve it.
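One way to do that is to copy each decompressed stream straight to its output file in fixed-size chunks instead of materializing a byte[] per attachment with IOUtils.toByteArray; memory use then stays bounded by the buffer size. A rough sketch of the copy loop (plain java.io, names illustrative):

```java
import java.io.*;

public class ChunkedCopy {
    // Copies in 8 KB chunks; memory use is independent of the blob size
    static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(buffer)) != -1) {
            out.write(buffer, 0, n);
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        byte[] blob = new byte[100_000]; // stands in for one decompressed attachment
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        long copied = copy(new ByteArrayInputStream(blob), sink);
        System.out.println(copied); // prints 100000
    }
}
```

In the original code this would replace the decompress-to-byte[]-then-write sequence: pass the decompressed InputStream and the FileOutputStream directly to a loop like this, and write each row's attachment inside the row mapper rather than accumulating all rows first.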
The advice about expanding your memory settings is good also.
There are also command-line parameters you can use for tuning memory, for example:
-Xms128m -Xmx1024m -XX:MaxPermSize=256m
Here's a good link on using JConsole to monitor a Java application:
http://java.sun.com/developer/technicalArticles/J2SE/jconsole.html
Your Java Virtual Machine probably isn't using all the memory it could. You can configure it to get more from the OS (see How can I increase the JVM memory?). That would be a quick and easy fix. If you still run out of memory, look at your algorithm -- do you really need all those BLOBs in memory at once?