I'm attempting to merge multiple byte arrays into a single PDF and have that working, but the file grows larger and larger when it should simply be replaced. It also seems that the file is not being closed correctly. I don't know if I am missing something in the merge logic, but that's the only place I could think it would be.
public class MergePDF {
    private static final Logger LOGGER = Logger.getLogger(MergePDF.class);
    private static ByteArrayOutputStream baos = new ByteArrayOutputStream();

    public static byte[] mergePDF(List<byte[]> pdfList) {
        try {
            Document PDFCombo = new Document();
            PdfSmartCopy copyCombo = new PdfSmartCopy(PDFCombo, baos);
            PDFCombo.open();
            PdfReader readInputPdf = null;
            int num_of_pages = 0;
            for (int i = 0; i < pdfList.size(); i++) {
                readInputPdf = new PdfReader(pdfList.get(i));
                num_of_pages = readInputPdf.getNumberOfPages();
                for (int page = 0; page < num_of_pages;) {
                    copyCombo.addPage(copyCombo.getImportedPage(readInputPdf, ++page));
                }
            }
            PDFCombo.close();
        } catch (Exception e) {
            LOGGER.error(e);
        }
        return baos.toByteArray();
    }
}
I assume I'm missing some sort of close in the process, because when I save the file later the size keeps growing, yet the PDF I view doesn't gain any additional pages.
Here is how I save the PDF before sending it to a third party. When sent, it is a byte array.
try {
    FileOutputStream out = new FileOutputStream(outMessage.getDocumentTitle());
    out.write(outMessage.getPayload());
    out.close();
} catch (FileNotFoundException e) {
    return null;
} catch (IOException e) {
    return null;
}
I have been told that there are multiple PDF headers and EOFs in the byte array that I'm sending over.
Based on #mkl's comment, it turns out that this was indeed true:
#mkl's suggestion in combination with your mention of the "close"
issue could indeed indicate that you're not replacing one set of bytes
by another set of bytes, but that you are, in fact, adding a new set
of bytes to the old set of bytes.
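One quick way to confirm that diagnosis is to count how many times the `%PDF-` header marker occurs in the bytes you send; a clean merge result should contain exactly one. A minimal sketch (the class and method names here are just for illustration):

```java
public class CountPdfHeaders {
    // Counts occurrences of the "%PDF-" marker in a byte array.
    static int countHeaders(byte[] data) {
        byte[] marker = "%PDF-".getBytes(java.nio.charset.StandardCharsets.US_ASCII);
        int count = 0;
        for (int i = 0; i + marker.length <= data.length; i++) {
            boolean match = true;
            for (int j = 0; j < marker.length; j++) {
                if (data[i + j] != marker[j]) { match = false; break; }
            }
            if (match) count++;
        }
        return count;
    }

    public static void main(String[] args) {
        byte[] single = "%PDF-1.4 ...content... %%EOF".getBytes();
        byte[] doubled = "%PDF-1.4 ... %%EOF%PDF-1.4 ... %%EOF".getBytes();
        System.out.println(countHeaders(single));  // 1
        System.out.println(countHeaders(doubled)); // 2
    }
}
```

If the count is greater than one, the output buffer contains the previous merge result plus the new one appended.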
This is how to solve the problem:
public class MergePDF {
    private static final Logger LOGGER = Logger.getLogger(MergePDF.class);

    public static byte[] mergePDF(List<byte[]> pdfList) {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        try {
            Document PDFCombo = new Document();
            PdfSmartCopy copyCombo = new PdfSmartCopy(PDFCombo, baos);
            PDFCombo.open();
            PdfReader readInputPdf = null;
            int num_of_pages = 0;
            for (int i = 0; i < pdfList.size(); i++) {
                readInputPdf = new PdfReader(pdfList.get(i));
                num_of_pages = readInputPdf.getNumberOfPages();
                for (int page = 0; page < num_of_pages;) {
                    copyCombo.addPage(copyCombo.getImportedPage(readInputPdf, ++page));
                }
            }
            PDFCombo.close();
        } catch (Exception e) {
            LOGGER.error(e);
        }
        return baos.toByteArray();
    }
}
Note that baos is declared before the try block so that the final return statement still compiles. Now you probably understand why whoever taught you to code explained that using static variables is a bad idea in most cases.
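The effect of the shared static stream can be reproduced in isolation. This sketch (hypothetical method names, not from the original code) shows how a static ByteArrayOutputStream keeps appending across calls while a local one starts fresh each time:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class StaticStreamDemo {
    private static final ByteArrayOutputStream SHARED = new ByteArrayOutputStream();

    // Bug pattern: every call appends to everything written before.
    static byte[] buildShared(byte[] payload) throws IOException {
        SHARED.write(payload);
        return SHARED.toByteArray();
    }

    // Fixed pattern: a fresh buffer on every call.
    static byte[] buildLocal(byte[] payload) throws IOException {
        ByteArrayOutputStream local = new ByteArrayOutputStream();
        local.write(payload);
        return local.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(buildShared("abc".getBytes()).length); // 3
        System.out.println(buildShared("abc".getBytes()).length); // 6 — old bytes are still there
        System.out.println(buildLocal("abc".getBytes()).length);  // 3
        System.out.println(buildLocal("abc".getBytes()).length);  // 3
    }
}
```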
Related
I use the code below to generate a zip file and return it to the frontend. Performance is normal when the file is small. However, when the compressed file exceeds 1 GB, the downloaded file is corrupted and the number of compressed files is reduced. This does not always occur; sometimes downloading a 3.2 GB file works fine again =_=
Looking forward to hearing from everyone
outputStream = response.getOutputStream();
ZipOutputStream zipOutStream = null;
FileInputStream fileInputStream = null;
BufferedInputStream bis = null;
try {
    zipOutStream = new ZipOutputStream(new BufferedOutputStream(outputStream));
    zipOutStream.setMethod(ZipOutputStream.DEFLATED);
    for (int i = 0; i < files.size(); i++) {
        File file = files.get(i);
        fileInputStream = new FileInputStream(file);
        bis = new BufferedInputStream(fileInputStream);
        zipOutStream.putNextEntry(new ZipEntry(fileName[i]));
        int len = 0;
        byte[] bs = new byte[40];
        while ((len = bis.read(bs)) != -1) {
            zipOutStream.write(bs, 0, len);
        }
        bis.close();
        fileInputStream.close();
        zipOutStream.closeEntry();
    }
} catch (IOException e) {
    e.printStackTrace();
} finally {
    try {
        if (Objects.nonNull(fileInputStream)) {
            fileInputStream.close();
        }
        if (Objects.nonNull(bis)) {
            bis.close();
        }
        if (Objects.nonNull(zipOutStream)) {
            zipOutStream.flush();
            zipOutStream.close();
        }
        if (Objects.nonNull(outputStream)) {
            outputStream.close();
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}
One of the potential issues with your code is memory leaks and streams that are not properly flushed and closed if the code runs into any exceptions.
One way to help reduce these leaks and to ensure all resources are properly closed, even when exceptions occur, is to use try-with-resources.
Here is an example of how your code example can be rewritten to utilize try-with-resources.
In this example, to get it to compile in a detached IDE environment, I've put the code in a method named testStream, since I don't know where the parameters files and fileName come from. Validation of those two parameters is therefore left to chance; it is assumed both have the same number of entries and are paired element-by-element (element 0 of each goes together). Since this odd relationship exists, it is linked by the variable named i to mirror the original source code.
public void testStream( HttpServletResponse response, List<File> files, String[] fileName )
{
int i = 0;
try (
ServletOutputStream outputStream = response.getOutputStream();
ZipOutputStream zipOutStream = new ZipOutputStream( new BufferedOutputStream( outputStream ) );
)
{
zipOutStream.setMethod( ZipOutputStream.DEFLATED );
for ( File file : files )
{
try (
BufferedInputStream bis = new BufferedInputStream(
new FileInputStream( file ) );
) {
zipOutStream.putNextEntry( new ZipEntry( fileName[i++] ) );
int len = 0;
byte[] bs = new byte[4096];
while ( (len = bis.read( bs )) != -1 )
{
zipOutStream.write( bs, 0, len );
}
// optional since putNextEntry will auto-close open entries:
zipOutStream.closeEntry();
}
}
}
catch ( IOException e )
{
e.printStackTrace();
}
}
This compiles successfully and should work, but it may need other adjustments to run in your environment with your data. Hopefully it eliminates the issues you have been having with closing streams and other resources.
I have a program that reads information from a database. Sometimes the message is bigger than expected, so before sending it to my broker I zip it with this code:
public static byte[] zipBytes(byte[] input) throws IOException {
    ByteArrayOutputStream bos = new ByteArrayOutputStream(input.length);
    OutputStream ej = new DeflaterOutputStream(bos);
    ej.write(input);
    ej.close();
    bos.close();
    return bos.toByteArray();
}
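For what it's worth, the deflate approach itself is sound; a round trip through DeflaterOutputStream and InflaterInputStream can be sketched as below (try-with-resources replaces the explicit close calls; the class and method names are just for illustration):

```java
import java.io.*;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.InflaterInputStream;

public class DeflateRoundTrip {
    // Compresses a byte array; closing the DeflaterOutputStream finishes the stream.
    static byte[] deflate(byte[] input) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (OutputStream out = new DeflaterOutputStream(bos)) {
            out.write(input);
        }
        return bos.toByteArray();
    }

    // Decompresses data produced by deflate().
    static byte[] inflate(byte[] input) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (InputStream in = new InflaterInputStream(new ByteArrayInputStream(input))) {
            byte[] buf = new byte[4096];
            int len;
            while ((len = in.read(buf)) != -1) {
                bos.write(buf, 0, len);
            }
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] original = "hello hello hello hello".getBytes();
        byte[] packed = deflate(original);
        System.out.println(java.util.Arrays.equals(original, inflate(packed))); // true
    }
}
```

The memory problem in the question comes from holding the whole input and output in byte arrays at once, not from the deflate step itself.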
Recently, I retrieved an 80 MB message from the DB, and when executing the code above an OutOfMemoryError is thrown on the ByteArrayOutputStream line. My Java program only has 512 MB of memory for the whole process and I can't give it more.
How can I solve this?
This is not a duplicate question; I can't increase the heap size.
EDIT:
This is flow of my java code:
rs = stmt.executeQuery("SELECT data FROM tbl_alotofinfo"); // rs is a valid ResultSet
while (rs.next()) {
    byte[] bt = rs.getBytes("data");
    if (bt.length > 60) { // only rows with content > 60; I need to know the size of the message, so if I use rs.getBinaryStream I cannot know the size, can I?
        if (bt.length >= 10000000) {
            // here I need to zip the byte array before sending it, so
            bt = zipBytes(bt); // this method is above
            // then publish bt to my broker
        } else {
            // here publish the byte array to my broker
        }
    }
}
EDIT
I've tried with PipedInputStream, and the memory the process consumes is the same as with the zipBytes(byte[] input) method.
private InputStream zipInputStream(InputStream in) throws IOException {
PipedInputStream zipped = new PipedInputStream();
PipedOutputStream pipe = new PipedOutputStream(zipped);
new Thread(
() -> {
try(OutputStream zipper = new DeflaterOutputStream(pipe)){
IOUtils.copy(in, zipper);
zipper.flush();
} catch (IOException e) {
IOUtils.closeQuietly(zipped); // close it on error case only
e.printStackTrace();
} finally {
IOUtils.closeQuietly(in);
IOUtils.closeQuietly(zipped);
IOUtils.closeQuietly(pipe);
}
}
).start();
return zipped;
}
How can I compress my InputStream with Deflate?
EDIT
That information is sent to JMS in Universal Messaging Server by Software AG, which uses a Nirvana client (documentation: https://documentation.softwareag.com/onlinehelp/Rohan/num10-2/10-2_UM_webhelp/um-webhelp/Doc/java/classcom_1_1pcbsys_1_1nirvana_1_1client_1_1n_consume_event.html). Data is saved in nConsumeEvent objects, and the documentation shows only two ways to send that information:
nConsumeEvent (String tag, byte[] data)
nConsumeEvent (String tag, Document adom)
https://documentation.softwareag.com/onlinehelp/Rohan/num10-5/10-5_UM_webhelp/index.html#page/um-webhelp%2Fco-publish_3.html%23
The code for connection is:
nSessionAttributes nsa = new nSessionAttributes("nsp://127.0.0.1:9000");
MyReconnectHandler rhandler = new MyReconnectHandler();
nSession mySession = nSessionFactory.create(nsa, rhandler);
if (!mySession.isConnected()) {
    mySession.init();
}
nChannelAttributes chaAttr = new nChannelAttributes();
chaAttr.setName("mychannel"); // This is a topic
nChannel myChannel = mySession.findChannel(chaAttr);
List<nConsumeEvent> messages = new ArrayList<nConsumeEvent>();
rs = stmt.executeQuery("SELECT data FROM tbl_alotofinfo");
while (rs.next()) {
    byte[] bt = rs.getBytes("data");
    if (bt.length > 60) {
        nEventProperties prop = new nEventProperties();
        if (bt.length > 10000000) {
            bt = compressData(bt); // here I need to compress data without ByteArrayInputStream
            prop.put("isZip", "true");
            nConsumeEvent ncon = new nConsumeEvent("1", bt);
            ncon.setProperties(prop);
            messages.add(ncon);
        } else {
            prop.put("isZip", "false");
            nConsumeEvent ncon = new nConsumeEvent("1", bt);
            ncon.setProperties(prop);
            messages.add(ncon);
        }
    }
    nTransactionAttributes tattrib = new nTransactionAttributes(myChannel);
    nTransaction myTransaction = nTransactionFactory.create(tattrib);
    Vector<nConsumeEvent> m = new Vector<nConsumeEvent>(messages);
    myTransaction.publish(m);
    myTransaction.commit();
}
Because of API restrictions, at the end of the day I need to send the information as a byte array; if that is the only byte array in my code, that's fine. How can I compress the byte array, or the InputStream from rs.getBinaryStream(), in this implementation?
EDIT
The database server used is PostgreSQL v11.6
EDIT
I've applied the first solution from #VGR and it works fine.
Only one thing: the SELECT query runs inside a while(true) loop, like:
while (true) {
    rs = stmt.executeQuery("SELECT data FROM tbl_alotofinfo"); // rs is a valid ResultSet
    // all that implementation you know from this entire post
    Thread.sleep(10000);
}
So a SELECT is executed every 10 seconds.
But I've done a test with my program running, and the memory just increases with each iteration. Why? If the information the database returns is the same in each request, shouldn't the memory stay at the level of the first request? Or maybe I forgot to close a stream?
while (true) {
    rs = stmt.executeQuery("SELECT data FROM tbl_alotofinfo"); // rs is a valid ResultSet
    while (rs.next()) {
        //byte[] bt = rs.getBytes("data");
        byte[] bt;
        try (BufferedInputStream source = new BufferedInputStream(
                rs.getBinaryStream("data"), 10_000_001)) {
            source.mark(this.zip + 1);
            boolean sendToBroker = true;
            boolean needCompression = true;
            for (int i = 0; i <= 10_000_000; i++) {
                if (source.read() < 0) {
                    sendToBroker = (i > 60);
                    needCompression = (i >= this.zip);
                    break;
                }
            }
            if (sendToBroker) {
                nEventProperties prop = new nEventProperties();
                // Rewind stream
                source.reset();
                if (needCompression) {
                    System.out.println("Message size larger than expected. Compressing message");
                    ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
                    try (OutputStream brokerStream = new DeflaterOutputStream(byteStream)) {
                        IOUtils.copy(source, brokerStream);
                    }
                    bt = byteStream.toByteArray();
                    prop.put("zip", "true");
                } else {
                    bt = IOUtils.toByteArray(source);
                }
                System.out.println("size: " + bt.length);
                prop.put("host", this.host);
                nConsumeEvent ncon = new nConsumeEvent("" + rs.getInt("xid"), bt);
                ncon.setProperties(prop);
                messages.add(ncon);
            }
        }
    }
}
For example, this is the heap memory at two points in time: the first run uses about 500 MB, and the second (with the same information from the database) about 1000 MB.
rs.getBytes("data") reads the entire 80 megabytes into memory at once. In general, if you are reading a potentially large amount of data, you don’t want to try to keep it all in memory.
The solution is to use getBinaryStream instead.
Since you need to know whether the total size is larger than 10,000,000 bytes, you have two choices:
Use a BufferedInputStream with a buffer of at least that size, which will allow you to use mark and reset in order to “rewind” the InputStream.
Read the data size as part of your query. You may be able to do this by using a Blob or using a function like LENGTH.
The first approach will use up 10 megabytes of program memory for the buffer, but that’s better than hundreds of megabytes:
while (rs.next()) {
    try (BufferedInputStream source = new BufferedInputStream(
            rs.getBinaryStream("data"), 10_000_001)) {
        source.mark(10_000_001);
        boolean sendToBroker = true;
        boolean needCompression = true;
        for (int i = 0; i <= 10_000_000; i++) {
            if (source.read() < 0) {
                sendToBroker = (i > 60);
                needCompression = (i >= 10_000_000);
                break;
            }
        }
        if (sendToBroker) {
            // Rewind stream
            source.reset();
            if (needCompression) {
                try (OutputStream brokerStream =
                        new DeflaterOutputStream(broker.getOutputStream())) {
                    source.transferTo(brokerStream);
                }
            } else {
                try (OutputStream brokerStream = broker.getOutputStream()) {
                    source.transferTo(brokerStream);
                }
            }
        }
    }
}
Notice that no byte arrays and no ByteArrayOutputStreams are used. The actual data is not kept in memory, except for the 10 megabyte buffer.
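The mark/reset mechanics used above can be seen in a tiny self-contained sketch (the sizes here are illustrative, far smaller than the 10 MB buffer in the answer):

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.IOException;

public class MarkResetDemo {
    // Reads ahead to inspect the stream, then rewinds and reads the same bytes again.
    static boolean demo() throws IOException {
        byte[] data = "0123456789".getBytes();
        BufferedInputStream in = new BufferedInputStream(new ByteArrayInputStream(data), 16);
        in.mark(16);           // remember this position; valid while at most 16 bytes are read
        int first = in.read(); // consume a few bytes to "peek" at the content
        in.read();
        in.reset();            // rewind to the marked position
        int again = in.read(); // same byte as the first read
        return first == again;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(demo()); // true
    }
}
```

The mark's read-ahead limit must be at least as large as the number of bytes read before reset, which is why the answer sizes both the buffer and the mark at 10,000,001.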
The second approach is shorter, but I’m not sure how portable it is across databases:
while (rs.next()) {
    Blob data = rs.getBlob("data");
    long length = data.length();
    if (length > 60) {
        try (InputStream source = data.getBinaryStream()) {
            if (length >= 10000000) {
                try (OutputStream brokerStream =
                        new DeflaterOutputStream(broker.getOutputStream())) {
                    source.transferTo(brokerStream);
                }
            } else {
                try (OutputStream brokerStream = broker.getOutputStream()) {
                    source.transferTo(brokerStream);
                }
            }
        }
    }
}
Both approaches assume there is some API available for your “broker” which allows the data to be written to an OutputStream. I’ve assumed, for the sake of example, that it’s broker.getOutputStream().
Update
It appears you are required to create nConsumeEvent objects, and that class only allows its data to be specified as a byte array in its constructors.
That byte array is unavoidable, obviously. And since there is no way to know the exact number of bytes a compressed version will require, a ByteArrayOutputStream is also unavoidable. (It’s possible to avoid using that class, but the replacement would be essentially a reimplementation of ByteArrayOutputStream.)
But you can still read the data as an InputStream in order to reduce your memory usage. And when you aren’t compressing, you can still avoid ByteArrayOutputStream, thereby creating only one additional byte array.
So, instead of this, which is not possible for nConsumeEvent:
if (needCompression) {
    try (OutputStream brokerStream =
            new DeflaterOutputStream(broker.getOutputStream())) {
        source.transferTo(brokerStream);
    }
} else {
    try (OutputStream brokerStream = broker.getOutputStream()) {
        source.transferTo(brokerStream);
    }
}
It should be:
byte[] bt;
if (needCompression) {
    ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
    try (OutputStream brokerStream = new DeflaterOutputStream(byteStream)) {
        source.transferTo(brokerStream);
    }
    bt = byteStream.toByteArray();
} else {
    bt = source.readAllBytes();
}
prop.put("isZip", String.valueOf(needCompression));
nConsumeEvent ncon = new nConsumeEvent("1", bt);
ncon.setProperties(prop);
messages.add(ncon);
Similarly, the second example should replace this:
if (length >= 10000000) {
    try (OutputStream brokerStream =
            new DeflaterOutputStream(broker.getOutputStream())) {
        source.transferTo(brokerStream);
    }
} else {
    try (OutputStream brokerStream = broker.getOutputStream()) {
        source.transferTo(brokerStream);
    }
}
with this:
byte[] bt;
if (length >= 10000000) {
    ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
    try (OutputStream brokerStream = new DeflaterOutputStream(byteStream)) {
        source.transferTo(brokerStream);
    }
    bt = byteStream.toByteArray();
    prop.put("isZip", "true");
} else {
    bt = source.readAllBytes();
    prop.put("isZip", "false");
}
nConsumeEvent ncon = new nConsumeEvent("1", bt);
ncon.setProperties(prop);
messages.add(ncon);
When I launch the JAR file that I build, the JTable does not display UTF-8 characters. When I launch it inside IntelliJ IDEA, it displays correctly.
I have a couple of ideas but cannot find a way to debug this. First, the table data is being serialised; maybe there is something I don't know about that, and I need to specify somewhere that it is UTF-8?
Second, maybe I need to specify the encoding where I create and populate the JTable as a GUI element? I am adding code for both.
The code that I use for saving and retrieving data is here:
private void deserialiseStatsData() {
    try {
        FileInputStream fis = new FileInputStream("tabledata.ser");
        ObjectInputStream ois = new ObjectInputStream(fis);
        tableData = (TableData) ois.readObject();
        fis.close();
        ois.close();
    } catch (Exception e) {
        tableData = new TableData();
    }
}

private void serialiseStatsData() {
    try {
        FileOutputStream fos = new FileOutputStream("tabledata.ser");
        ObjectOutputStream oos = new ObjectOutputStream(fos);
        oos.writeObject(tableData);
        oos.close();
        fos.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Code for creating and populating the table:
private void createUIComponents() {
    String[] headers = {"Exercise", "Did Succeed?", "Times Failed Before 1st Success",
            "Accuracy (%)", "Checked Result Before Succeeding"};
    TableData dataTable = null;
    try {
        FileInputStream fis = new FileInputStream("tabledata.ser");
        ObjectInputStream ois = new ObjectInputStream(fis);
        dataTable = (TableData) ois.readObject();
        fis.close();
        ois.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
    Object[][] data;
    int rows = dataTable.getRows();
    int cols = headers.length;
    data = new Object[rows][cols];
    for (int j = 0; j < rows; j++) {
        data[j][0] = dataTable.getLine(j).getExercise();
        data[j][1] = dataTable.getLine(j).isSuccess();
        data[j][2] = dataTable.getLine(j).getNoOfFails();
        data[j][3] = dataTable.getLine(j).getPercentage();
        data[j][4] = dataTable.getLine(j).isAnswerCheckBeforeSuccess();
    }
    table = new JTable(data, headers);
}
What I have tried:
Added lines to VM Options:
-Dconsole.encoding=UTF-8
-Dfile.encoding=UTF-8
Changed encoding in Settings -> File Encodings to UTF-8
At this point, I don't know what else I could do.
Edited:
In the JTable I get placeholder glyphs instead of the expected symbols (screenshots of the wrong and expected output omitted). Char list which does not display:
union = '\u222A';
intersection = '\u2229';
product = '\u2A2F';
difference = '\u2216';
infinity = '\u221E';
belongsTo = '\u2208';
weakSubset = '\u2286';
properSubset = '\u2282';
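Since JTable and Java serialization are both encoding-agnostic for String data, a likely suspect is that the default logical font of the JRE running the JAR lacks glyphs for these code points. A quick way to check is Font.canDisplayUpTo; this is only a sketch, and the font name "Dialog" is just an example:

```java
import java.awt.Font;

public class GlyphCheck {
    // Returns true if the given font has glyphs for every character in s;
    // canDisplayUpTo returns -1 when all characters are displayable.
    static boolean canDisplayAll(Font font, String s) {
        return font.canDisplayUpTo(s) == -1;
    }

    public static void main(String[] args) {
        Font dialog = new Font("Dialog", Font.PLAIN, 12);
        String symbols = "\u222A\u2229\u2A2F\u2216\u2208\u2286\u2282\u221E";
        System.out.println("Dialog can display all: " + canDisplayAll(dialog, symbols));
    }
}
```

If this prints false on the machine where the JAR runs, the fix is to set a font on the JTable (or its renderer) that is known to cover these mathematical symbols, rather than changing any encoding setting.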
I am trying to query a database with Java JDBC and compress the data in one column to a gzip file in a specific directory. I have tested my JDBC query and it works fine, but the gzip code does not advance with the while loop; it runs for the first row and gets stuck there. Why is it stuck? Please help!
These folders already exist: D:\Directory\My\year\id1\id2
// Some JDBC query code here; it works well. I query all rows: Data, year, id1, id2, id3
while (myRs1.next()) {
    String str = Data;
    File myGzipFile = new File("D:\\Directory\\My\\" + year + "\\" + id1 + "\\" + id2 + "\\" + id3 + ".gzip");
    GZIPOutputStream gos = null;
    InputStream is = new ByteArrayInputStream(str.getBytes());
    gos = new GZIPOutputStream(new FileOutputStream(myGzipFile));
    byte[] buffer = new byte[1024];
    int len;
    while ((len = is.read(buffer)) != -1) {
        gos.write(buffer, 0, len);
        System.out.print("done for:" + id3);
    }
    try { gos.close(); } catch (IOException e) { }
}
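For comparison, here is a hedged sketch of the same per-row gzip write using try-with-resources, which guarantees the GZIP trailer is written and the file handle released even if an exception occurs mid-loop (the method names writeGzip/readGzip are illustrative, not from the original code):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipColumnDemo {
    // Compresses one string (e.g. one row's Data column) to a gzip file.
    static void writeGzip(Path target, String data) throws IOException {
        try (InputStream is = new ByteArrayInputStream(data.getBytes("UTF-8"));
             GZIPOutputStream gos = new GZIPOutputStream(Files.newOutputStream(target))) {
            byte[] buffer = new byte[1024];
            int len;
            while ((len = is.read(buffer)) != -1) {
                gos.write(buffer, 0, len);
            }
        } // both streams closed here, even on exception
    }

    // Reads the gzip file back, to verify the round trip.
    static String readGzip(Path source) throws IOException {
        try (GZIPInputStream gis = new GZIPInputStream(Files.newInputStream(source))) {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            byte[] buffer = new byte[1024];
            int len;
            while ((len = gis.read(buffer)) != -1) {
                bos.write(buffer, 0, len);
            }
            return bos.toString("UTF-8");
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("row", ".gzip");
        writeGzip(tmp, "row data from the Data column");
        System.out.println(readGzip(tmp));
        Files.deleteIfExists(tmp);
    }
}
```

Swallowing the IOException from gos.close(), as in the original, can hide exactly the failure that makes the loop appear stuck.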
Try formatting the source like this to catch exceptions.
public class InputStreamDemo {
    public static void main(String[] args) throws Exception {
        InputStream is = null;
        int i;
        char c;
        try {
            is = new FileInputStream("C://test.txt");
            System.out.println("Characters printed:");
            // reads till the end of the stream
            while ((i = is.read()) != -1) {
                // converts integer to character
                c = (char) i;
                // prints character
                System.out.print(c);
            }
        } catch (Exception e) {
            // if any I/O error occurs
            e.printStackTrace();
        } finally {
            // releases system resources associated with this stream
            if (is != null)
                is.close();
        }
    }
}
Please have a look at the following code
package normal;

import java.io.*;
import java.util.*;

public class ReGenerateFiles {
    private File inputFolder, outputFolder;
    private String outputFormat, nameOfTheFile;
    private FileInputStream fis;
    private FileOutputStream fos;
    List<Byte> al;
    private byte[] buffer;
    private int blockNumber = 1;

    public ReGenerateFiles(File inputFile, File outputFile, String nameOfTheFile, String outputFormat) {
        this.inputFolder = inputFile;
        this.outputFolder = outputFile;
        this.nameOfTheFile = nameOfTheFile;
        this.outputFormat = outputFormat;
    }

    public void reGenerate() throws IOException {
        File file[] = inputFolder.listFiles();
        for (int i = 0; i < file.length; i++) {
            System.out.println(file[i]);
        }
        for (int i = 0; i < file.length; i++) {
            try {
                fis = new FileInputStream(file[i]); // was missing in the listing; fis was never initialized
                buffer = new byte[5000000];
                int read = fis.read(buffer, 0, buffer.length);
                fis.close();
                writeBlock();
            } catch (IOException io) {
                io.printStackTrace();
            } finally {
                fis.close();
                fos.close();
            }
        }
    }

    private void writeBlock() throws IOException {
        try {
            fos = new FileOutputStream(outputFolder + "/" + nameOfTheFile + "." + outputFormat, true);
            fos.write(buffer, 0, buffer.length);
            fos.flush();
            fos.close();
        } catch (Exception e) {
            fos.close();
            e.printStackTrace();
        } finally {
            fos.close();
        }
    }
}
Here I am trying to re-assemble a split file back into its original file. This is how I split it (the selected answer).
Now, when I try to re-assemble it, I get an error similar to "Cannot access kalimba.mp3. It is used by another program." This happens after processing 93 split files (there are 200 more). Why does it happen, even though I have made sure the streams are closed inside the finally blocks?
Another question: I assigned 5000000 as the byte array size. I did this because I failed to size the byte array according to the original size of the particular file being processed. I tried
buffer = new byte[(int) file[i].length()];
before, but it didn't work.
Please help me solve these two issues and re-assemble the split file correctly.
That is because the buffer is never fitted to the size of the actual content. It is always 5,000,000 bytes, while the actual size varies. That's the issue.
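A sketch of sizing each read to the actual part length, rather than using a fixed buffer, is below; the class and method names are illustrative, and Files.readAllBytes stands in for the manual FileInputStream/buffer bookkeeping:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class ReassembleDemo {
    // Concatenates split parts into one output file, reading each part at
    // exactly its own length instead of a fixed 5,000,000-byte buffer.
    static void reassemble(Path[] parts, Path output) throws IOException {
        try (OutputStream out = Files.newOutputStream(output)) {
            for (Path part : parts) {
                byte[] block = Files.readAllBytes(part); // sized to the part's real length
                out.write(block);
            }
        } // output stream closed here, even on exception
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("parts");
        Path p1 = Files.write(dir.resolve("part1"), "Hello, ".getBytes());
        Path p2 = Files.write(dir.resolve("part2"), "world!".getBytes());
        Path out = dir.resolve("joined.txt");
        reassemble(new Path[]{p1, p2}, out);
        System.out.println(new String(Files.readAllBytes(out))); // Hello, world!
    }
}
```

Writing only the bytes actually read also avoids padding the output with trailing zeros from the oversized buffer, which is what corrupts the reassembled file.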