I am trying to generate an ISO file with the JIIC library:
https://github.com/stephenc/java-iso-tools
The thing is that I need to know the exact size the generated ISO file will have after adding a file. (I have a restriction on the generated ISO file size, and if it is exceeded I need to start a new ISO file.)
I am trying:
long currentIsoSize;
var numOfCreatedIsos = 1;
var root = new ISO9660RootDirectory();
for (var i = 0; i < listFilesInPdfDir.length; i++) {
    if (listFilesInPdfDir[i].isDirectory()) {
        currentIsoSize = generateIso(event, numOfCreatedIsos, root, listFilesInPdfDir[i]);
        if (maxIsoFileSizeExceeds(currentIsoSize)) {
            root.getDirectories().remove(root.getDirectories().get(root.getDirectories().size() - 1));
            generateIso(event, numOfCreatedIsos, root, null);
            root = new ISO9660RootDirectory();
            numOfCreatedIsos++;
            i--;
        }
    } else {
        log.warn("File: {} skipped", listFilesInPdfDir[i]);
    }
}
private long generateIso(ArchiveEventTrigger event, int numOfCreatedIsos, ISO9660RootDirectory root, File file) throws HandlerException, FileNotFoundException {
    if (file != null) {
        var directory = root.addDirectory(file);
        for (var pdf : Objects.requireNonNull(file.listFiles())) {
            directory.addFile(pdf);
        }
    }
    var isoName = ArchiveUtils.buildIsoName(event, numOfCreatedIsos);
    var isoFile = new File("DIR_TO_ISO" + isoName);
    var handler = new ISOImageFileHandler(isoFile);
    var iso = new CreateISO(handler, root);
    iso.process(Utils.isoConfig("daily-isos-" + numOfCreatedIsos), null, null, null);
    return new File("DIR_TO_ISO" + isoName).length();
}
The thing is that it creates the appropriate ISO, but:
The file size is not the expected one (files: 70 MB, ISO: 60 MB, which is about the size of the first directory inserted into the ISO).
Only the first directory contains valid files; the other directories contain the files, but they are corrupted.
I noticed that this happens because iso.process is called repeatedly for the same ISO file.
Any suggestions?
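For reference, the kind of up-front estimate I was hoping to compute looks roughly like this (a rough sketch only, not using the library; it assumes plain ISO 9660 with 2048-byte sectors and ignores directory and path-table overhead, so it gives a lower bound rather than the exact image size):

// Lower-bound estimate of an ISO 9660 image holding the given files:
// each file occupies whole 2048-byte sectors, plus the 32 KB system area.
private static long estimateIsoSize(File... files) {
    final long sectorSize = 2048L;
    long size = 16 * sectorSize; // reserved system area at the start of the image
    for (File f : files) {
        long sectors = (f.length() + sectorSize - 1) / sectorSize; // round up to whole sectors
        size += sectors * sectorSize;
    }
    return size; // real images are larger: volume descriptors, directories, path tables
}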
I am trying to merge two docx files, each of which has its own bullet numbering; after merging the Word documents, the bullet numbers are automatically renumbered.
E.g.:
Doc A has 1 2 3
Doc B has 1 2 3
After merging, the bullet numbering is updated to 1 2 3 4 5 6.
How can I stop this?
I am using the following code:
if (counter == 1) {
    FirstFileByteStream = org.apache.commons.codec.binary.Base64.decodeBase64(strFileData.getBytes());
    FirstFileIS = new java.io.ByteArrayInputStream(FirstFileByteStream);
    FirstWordFile = org.docx4j.openpackaging.packages.WordprocessingMLPackage.load(FirstFileIS);
    main = FirstWordFile.getMainDocumentPart();
    // Add page break for Table of Content
    main.addObject(objBr);
    if (htmlCode != null) {
        main.addAltChunk(org.docx4j.openpackaging.parts.WordprocessingML.AltChunkType.Html, htmlCode.toString().getBytes());
    }
    // Table of contents - End
} else {
    FileByteStream = org.apache.commons.codec.binary.Base64.decodeBase64(strFileData.getBytes());
    FileIS = new java.io.ByteArrayInputStream(FileByteStream);
    byte[] bytes = IOUtils.toByteArray(FileIS);
    AlternativeFormatInputPart afiPart = new AlternativeFormatInputPart(new PartName("/part" + (chunkCount++) + ".docx"));
    afiPart.setContentType(new ContentType(CONTENT_TYPE));
    afiPart.setBinaryData(bytes);
    Relationship altChunkRel = main.addTargetPart(afiPart);
    CTAltChunk chunk = Context.getWmlObjectFactory().createCTAltChunk();
    chunk.setId(altChunkRel.getId());
    main.addObject(objBr);
    htmlCode = new StringBuilder();
    htmlCode.append("<html>");
    htmlCode.append("<h2><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><p style=\"font-family:'Arial Black'; color: #f35b1c\">" + ReqName + "</p></h2>");
    htmlCode.append("</html>");
    if (htmlCode != null) {
        main.addAltChunk(org.docx4j.openpackaging.parts.WordprocessingML.AltChunkType.Html, htmlCode.toString().getBytes());
    }
    // Add page break before new content
    main.addObject(objBr);
    // Add new content
    main.addObject(chunk);
}
Looking at your code, you are adding HTML altChunks to your document.
For these to display in Word, the HTML is converted to normal docx content.
An altChunk is usually converted by Word when you open the docx.
(Alternatively, docx4j-ImportXHTML can do it for an altChunk of type XHTML)
The upshot is that what happens with the bullets (when Word converts your HTML) is largely outside your control. You could experiment with CSS but I think Word will mostly ignore it.
An alternative may be to use XHTML altChunks and have docx4j-ImportXHTML convert them via main.convertAltChunks().
If the same problem occurs when you try that, well, at least we can address it.
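A rough sketch of that approach (untested; it assumes docx4j-ImportXHTML is on the classpath, that the content is well-formed XHTML rather than plain HTML, and that the file names are placeholders):

import java.io.File;
import java.nio.charset.StandardCharsets;

import org.docx4j.openpackaging.packages.WordprocessingMLPackage;
import org.docx4j.openpackaging.parts.WordprocessingML.AltChunkType;
import org.docx4j.openpackaging.parts.WordprocessingML.MainDocumentPart;

public class XhtmlAltChunkSketch {
    public static void main(String[] args) throws Exception {
        WordprocessingMLPackage pkg = WordprocessingMLPackage.load(new File("first.docx"));
        MainDocumentPart main = pkg.getMainDocumentPart();

        // Well-formed XHTML, so docx4j-ImportXHTML (not Word) performs the conversion
        String xhtml = "<div><ol><li>one</li><li>two</li><li>three</li></ol></div>";
        main.addAltChunk(AltChunkType.Xhtml, xhtml.getBytes(StandardCharsets.UTF_8));

        // Convert the XHTML altChunks into real WordprocessingML content
        WordprocessingMLPackage converted = main.convertAltChunks();
        converted.save(new File("merged.docx"));
    }
}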
I was able to fix my issue using the following code. I found it at http://webapp.docx4java.org/OnlineDemo/forms/upload_MergeDocx.xhtml. You can also generate your own custom code there; they have a nice demo that generates code according to your requirements :).
public final static String DIR_IN = System.getProperty("user.dir") + "/";
public final static String DIR_OUT = System.getProperty("user.dir") + "/";

public static void main(String[] args) throws Exception {
    String[] files = {"part1docx_20200717t173750539gmt.docx", "part1docx_20200717t173750539gmt (1).docx", "part1docx_20200717t173750539gmt.docx"};
    List<BlockRange> blockRanges = new ArrayList<>();
    for (int i = 0; i < files.length; i++) {
        BlockRange block = new BlockRange(WordprocessingMLPackage.load(new File(DIR_IN + files[i])));
        blockRanges.add(block);
        block.setStyleHandler(StyleHandler.RENAME_RETAIN);
        block.setNumberingHandler(NumberingHandler.ADD_NEW_LIST);
        block.setRestartPageNumbering(false);
        block.setHeaderBehaviour(HfBehaviour.DEFAULT);
        block.setFooterBehaviour(HfBehaviour.DEFAULT);
        block.setSectionBreakBefore(SectionBreakBefore.NEXT_PAGE);
    }
    // Perform the actual merge
    DocumentBuilder documentBuilder = new DocumentBuilder();
    WordprocessingMLPackage output = documentBuilder.buildOpenDocument(blockRanges);
    // Save the result
    SaveToZipFile saver = new SaveToZipFile(output);
    saver.save(DIR_OUT + "OUT_MergeWholeDocumentsUsingBlockRange.docx");
}
I have the code below, but it is not efficient at all; it is very slow, and the more pictures I have to compare, the longer it takes.
For example, with 500 pictures, each pass takes 2 minutes: 500 x 2 min = 1000 min!
The particularity is that, as soon as a picture is found to be identical to the one being compared, it is moved to another folder; then the remaining files are listed again for the next iteration.
Any idea?
public static void main(String[] args) throws IOException {
    String PicturesFolderPath = null;
    String removedFolderPath = null;
    String pictureExtension = null;
    if (args.length > 0) {
        PicturesFolderPath = args[0];
        removedFolderPath = args[1];
        pictureExtension = args[2];
    }
    if (StringUtils.isBlank(pictureExtension)) {
        pictureExtension = "jpg";
    }
    if (StringUtils.isBlank(removedFolderPath)) {
        removedFolderPath = Paths.get(".").toAbsolutePath().normalize().toString() + "/removed";
    }
    if (StringUtils.isBlank(PicturesFolderPath)) {
        PicturesFolderPath = Paths.get(".").toAbsolutePath().normalize().toString();
    }
    System.out.println("path to find pictures folder " + PicturesFolderPath);
    System.out.println("path to find removed pictures folder " + removedFolderPath);
    Collection<File> fileList = FileUtils.listFiles(new File(PicturesFolderPath), new String[] { pictureExtension }, false);
    System.out.println("there is " + fileList.size() + " files founded with extention " + pictureExtension);
    Iterator<File> fileIterator = fileList.iterator();
    //Iterator<File> loopFileIterator = fileList.iterator();
    File dest = new File(removedFolderPath);
    while (fileIterator.hasNext()) {
        File file = fileIterator.next();
        System.out.println("process image :" + file.getName());
        // each new iteration we retrieve the files staying
        Collection<File> list = FileUtils.listFiles(new File(PicturesFolderPath), new String[] { pictureExtension }, false);
        for (File f : list) {
            if (compareImage(file, f) && !file.getName().equals(f.getName())) {
                String filename = file.getName();
                System.out.println("file :" + file.getName() + " equal to " + f.getName() + " and will be moved on removed folder");
                File existFile = new File(removedFolderPath + "/" + file.getName());
                if (existFile.exists()) {
                    existFile.delete();
                }
                FileUtils.moveFileToDirectory(file, dest, false);
                fileIterator.remove();
                System.out.println("file :" + filename + " removed");
                break;
            }
        }
    }
}
// This method compares two image files.
// It returns true if both image files are equal, otherwise false.
public static boolean compareImage(File fileA, File fileB) {
    try {
        // take buffer data from both image files
        BufferedImage biA = ImageIO.read(fileA);
        DataBuffer dbA = biA.getData().getDataBuffer();
        int sizeA = dbA.getSize();
        BufferedImage biB = ImageIO.read(fileB);
        DataBuffer dbB = biB.getData().getDataBuffer();
        int sizeB = dbB.getSize();
        // compare the data-buffer objects
        if (sizeA == sizeB) {
            for (int i = 0; i < sizeA; i++) {
                if (dbA.getElem(i) != dbB.getElem(i)) {
                    return false;
                }
            }
            return true;
        } else {
            return false;
        }
    } catch (Exception e) {
        e.printStackTrace();
        return false;
    }
}
The already mentioned answer should help you a bit, as considering the width and height of a picture should exclude more candidate pairs quickly.
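For example, a cheap guard inside compareImage, right after both images have been read, could look like this (a small sketch on top of the code in the question):

// Cheap pre-check: images with different dimensions can never be pixel-identical,
// so skip the element-by-element loop entirely in that case.
if (biA.getWidth() != biB.getWidth() || biA.getHeight() != biB.getHeight()) {
    return false;
}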
However, you still have a big problem: for every new file, you read all old files. The number of comparisons grows quadratically, and with ImageIO.read being called at every step, it simply must be slow.
You need fingerprints, which can be compared very fast. You can't fingerprint the whole file content, as it is polluted by the metadata, but you can fingerprint the image data alone.
Just iterate over the image data of a file (like you do) and compute, e.g., an MD5 hash of it. Store it in a HashSet and you'll get a very fast lookup.
Some untested code
For every image file you want to compare, you compute (using Guava's hashing)
HashCode imageFingerprint(File file) throws IOException {
    Hasher hasher = Hashing.md5().newHasher();
    BufferedImage image = ImageIO.read(file);
    DataBuffer buffer = image.getData().getDataBuffer();
    int size = buffer.getSize();
    for (int i = 0; i < size; i++) {
        hasher.putInt(buffer.getElem(i));
    }
    return hasher.hash();
}
The computation works with the image data only, just like compareImage in the question, so the metadata get ignored.
Instead of searching for a duplicate in a directory, you compute the fingerprints of all its files and store them in a HashSet<HashCode>. For a new file, you compute its fingerprint and look it up in the set.
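Put together, the directory scan could then look roughly like this (an untested sketch; it reuses imageFingerprint from above plus Commons IO's FileUtils, and the folder/extension variables are assumed to be set as in the question):

// One pass over the directory: O(n) fingerprint computations instead of O(n^2) comparisons.
static void removeDuplicates(String PicturesFolderPath, String removedFolderPath, String pictureExtension) throws IOException {
    Set<HashCode> seen = new HashSet<>();
    File dest = new File(removedFolderPath);
    Collection<File> files = FileUtils.listFiles(new File(PicturesFolderPath), new String[] { pictureExtension }, false);
    for (File file : files) {
        HashCode fingerprint = imageFingerprint(file);
        if (!seen.add(fingerprint)) {
            // add() returned false, so an identical image was seen before: move this duplicate away
            FileUtils.moveFileToDirectory(file, dest, true);
        }
    }
}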
I have a folder named collect; it contains files such as selectData01.json, selectData02.json, selectData03.json and so on.
I have to count the number of files first, and then I will send a different file every minute.
Now I want to know how to achieve this.
public String getData() {
    String strLocation = new SendSituationData().getClass().getProtectionDomain().getCodeSource().getLocation().getPath();
    log.info("strLocation = ");
    // String strParent = new File(strLocation).getParent() + "/collectData/conf.properties";
    // System.out.println("strParent = " + strParent);
    File fileConf = new File("collect/");
    System.out.println("fileConf = " + fileConf.getAbsolutePath());
    List<List<String>> listFiles = new ArrayList<>();
    //File root = new File(DashBoardListener.class.getClassLoader().getResource("collectData/").getPath());
    //File root = new File("collectData/application.conf");
    File root = new File(fileConf.getAbsolutePath());
    System.out.println("root.listFiles( ) = " + root.listFiles());
    Arrays
        .stream(Objects.requireNonNull(root.listFiles()))
        .filter(file -> file.getName().endsWith("json"))
        .map(File::toPath)
        .forEach(path -> {
            try {
                //List<String> lines = Files.readAllLines(path);
                //System.out.println("lines = " + lines);
                List<String> lines = Files.readAllLines(path);
                listFiles.add(lines);
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
    String dataBody = listToString(listFiles.get(0));
    //log.info(dataBody);
    ResultMap result = buildRsult();
    //String jsonString = JSON.toJSONString(result);
}

public static String listToString(List<String> stringList) {
    if (stringList == null) {
        return null;
    }
    StringBuilder result = new StringBuilder();
    boolean flag = false;
    for (String string : stringList) {
        if (flag) {
            result.append("");
        } else {
            flag = true;
        }
        result.append(string);
    }
    return result.toString();
}
Supplement
My friend, maybe I didn't express my purpose clearly. If I have three files, I will send the first file at 0:00, the second file at 0:01, the third file at 0:03, the first file again at 0:04, the second file at 0:05, and so on.
If I have five files, I will send the first file at 0:00, the second file at 0:01, the third file at 0:03, the fourth file at 0:04, the fifth file at 0:05, and so on.
I want to know how to achieve this.
Supplement
My project contains a folder named collect. Each file represents a string.
First, I want to count the number of files in the collect folder, and then I will send a file every minute.
Any suggestions?
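To make the intent concrete, here is a rough sketch of the rotation I have in mind (plain java.util.concurrent; sendFile is a hypothetical placeholder for whatever actually sends the content):

import java.io.File;
import java.util.Objects;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class CollectSenderSketch {
    public static void main(String[] args) {
        File[] files = Objects.requireNonNull(
                new File("collect/").listFiles((dir, name) -> name.endsWith(".json")));
        AtomicInteger index = new AtomicInteger(0);
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Every minute, send the next file and wrap around at the end of the list
        scheduler.scheduleAtFixedRate(() -> {
            File next = files[index.getAndUpdate(i -> (i + 1) % files.length)];
            sendFile(next); // hypothetical: replace with the real sending logic
        }, 0, 1, TimeUnit.MINUTES);
    }

    private static void sendFile(File file) {
        System.out.println("sending " + file.getName());
    }
}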
I would use Apache Camel with the file2 component.
http://camel.apache.org/file2.html
Please read about the 'noop' option before running any tests.
Processed files are deleted by default, as far as I remember.
Update - simple example added:
I would recommend starting with https://start.spring.io/
Add at least two dependencies: Web and Camel (requires Spring Boot >=1.4.0.RELEASE and <2.0.0.M1).
Create a new route; you can start from this example:
@Component
public class FileRouteBuilder extends RouteBuilder {

    public static final String DESTINATION = "file://out/";
    public static final String SOURCE = "file://in/?noop=true";

    @Override
    public void configure() throws Exception {
        from(SOURCE)
            .process(exchange -> {
                //your processing here
            })
            .log("File: ${file:name} has been sent to: " + DESTINATION)
            .to(DESTINATION);
    }
}
My output:
2018-03-22 15:24:08.917 File: test1.txt has been sent to: file://out/
2018-03-22 15:24:08.931 File: test2.txt has been sent to: file://out/
2018-03-22 15:24:08.933 File: test3.txt has been sent to: file://out/
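If the requirement is strictly one file per minute, the file consumer's polling options may help (untested; delay is the poll interval in milliseconds and maxMessagesPerPoll caps how many files are picked up per poll):

// Hypothetical variant of the SOURCE constant above: poll once a minute, pick up at most one file
public static final String SOURCE = "file://in/?noop=true&delay=60000&maxMessagesPerPoll=1";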
I have a problem that needs solving where we use OpenOffice 1.1.4 templated reports and programmatically export them to PDF.
The team who creates the templates has recently changed the header image and some images in a table to background images (before, they were simply inserted). Since this change, the current program no longer includes the images in the PDFs it creates. We can export from OpenOffice manually and the images are included. Can anyone help with a change I may need to make to get these background images included, please?
The current code:
private void print(XInterface xComponent,
PrintRequestDTO printReq, File sourceFile,
Vector<String> pages) throws java.lang.Exception {
String pageRange;
// XXX create the PDF via OOo export facility
com.sun.star.frame.XStorable pdfCreator = (com.sun.star.frame.XStorable) UnoRuntime
.queryInterface(
com.sun.star.frame.XStorable.class,
xComponent);
PropertyValue[] outputOpts = new PropertyValue[2];
outputOpts[0] = new PropertyValue();
outputOpts[0].Name = "CompressionMode";
outputOpts[0].Value = "1"; // XXX Change this perhaps?
outputOpts[1] = new PropertyValue();
outputOpts[1].Name = "PageRange";
if (printReq.getPageRange() == null) {
pageRange = "1-";
}
else {
if (printReq.getPageRange().length() > 0) {
pageRange = printReq.getPageRange();
}
else {
pageRange = "1-";
}
}
log.debug("Print Instruction - page range = "
+ pageRange);
PropertyValue[] filterOpts = new PropertyValue[3];
filterOpts[0] = new PropertyValue();
filterOpts[0].Name = "FilterName";
filterOpts[0].Value = "writer_pdf_Export"; // Writer PDF export filter
filterOpts[1] = new PropertyValue();
filterOpts[1].Name = "Overwrite";
filterOpts[1].Value = new Boolean(true);
filterOpts[2] = new PropertyValue();
filterOpts[2].Name = "FilterData";
filterOpts[2].Value = outputOpts;
if (pages.size() == 0) { // ie no forced page breaks
// set page range
outputOpts[1].Value = pageRange;
filterOpts[2] = new PropertyValue();
filterOpts[2].Name = "FilterData";
filterOpts[2].Value = outputOpts;
File outputFile = new File(
sourceFile.getParent(),
printReq.getOutputFileName()
+ ".pdf");
StringBuffer sPDFUrl = new StringBuffer(
"file:///");
sPDFUrl.append(outputFile.getCanonicalPath()
.replace('\\', '/'));
log.debug("PDF file = " + sPDFUrl.toString());
if (pdfCreator != null) {
sleep();
pdfCreator.storeToURL(sPDFUrl.toString(),
filterOpts);
}
}
else if (pages.size() > 1) {
throw new PrintDocumentException(
"Only one forced split catered for currently");
}
else { // a forced split exists.
log.debug("Page break found in "
+ (String) pages.firstElement());
String[] newPageRanges = calculatePageRanges(
(String) pages.firstElement(), pageRange);
int rangeCount = newPageRanges.length;
for (int i = 0; i < rangeCount; i++) {
outputOpts[1].Value = newPageRanges[i];
log.debug("page range = " + newPageRanges[i]);
filterOpts[2] = new PropertyValue();
filterOpts[2].Name = "FilterData";
filterOpts[2].Value = outputOpts;
String fileExtension = (i == 0 && rangeCount > 1) ? "__Summary.pdf"
: ".pdf";
File outputFile = new File(
sourceFile.getParent(),
printReq.getOutputFileName()
+ fileExtension);
StringBuffer sPDFUrl = new StringBuffer(
"file:///");
sPDFUrl.append(outputFile.getCanonicalPath()
.replace('\\', '/'));
log.debug("PDF file = " + sPDFUrl.toString());
if (pdfCreator != null) {
log.debug("about to create the PDF file");
sleep();
pdfCreator.storeToURL(
sPDFUrl.toString(), filterOpts);
log.debug("done");
}
}
}
}
Thanks in advance.
Glad that the suggestion of making the document visible helped. Since it has also fixed the problem, you have a timing/threading issue. I suspect you'll find that the other dodgy option, sleeping before executing the save to PDF, would also allow the images to appear. Neither of these solutions is good.
Most likely the best fix is to upgrade to a newer version of OpenOffice (the API calls you have should still work). Another option would be to call the API to ask the document to refresh itself.
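A sketch of the refresh idea (untested; it assumes xComponent is the loaded Writer document, as in the print method above):

// Ask the document to refresh itself before exporting, so images are rendered
com.sun.star.util.XRefreshable refreshable =
    (com.sun.star.util.XRefreshable) UnoRuntime.queryInterface(
        com.sun.star.util.XRefreshable.class, xComponent);
if (refreshable != null) {
    refreshable.refresh();
}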
After finding the correct property, I was able to open the file with the Hidden property set to false, which meant that when the file was exported to PDF it included the background images. It's a shame I could not find another solution that kept the file hidden, but at least it's working.
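For reference, the loading step ended up looking roughly like this (a sketch from memory; loader stands for the XComponentLoader obtained from the Desktop service and sourceUrl for the document being opened, both hypothetical names):

// Load the document visibly so backgrounds are rendered before the PDF export
PropertyValue[] loadProps = new PropertyValue[1];
loadProps[0] = new PropertyValue();
loadProps[0].Name = "Hidden";
loadProps[0].Value = Boolean.FALSE; // was Boolean.TRUE before, which dropped the background images
XComponent xComponent = loader.loadComponentFromURL(sourceUrl, "_blank", 0, loadProps);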
I fail to understand why my code is giving me HDF5 Library Exceptions. It points at the createScalarDS method as the source of the error. But I believe this method does exist. Can anyone tell me why this code is unable to create an opaque dataset? What should the modification(s) be? Thanks.
public static void createFile(Message message) throws Exception {
    // retrieve an instance of H5File
    FileFormat fileFormat = FileFormat.getFileFormat(FileFormat.FILE_TYPE_HDF5);
    if (fileFormat == null) {
        System.err.println("Cannot find HDF5 FileFormat.");
        return;
    }
    // create a new file with a given file name.
    H5File testFile = (H5File) fileFormat.create(fname);
    if (testFile == null) {
        System.err.println("Failed to create file:" + fname);
        return;
    }
    // open the file and retrieve the root group
    testFile.open();
    Group root = (Group) ((javax.swing.tree.DefaultMutableTreeNode) testFile.getRootNode()).getUserObject();
    Group g1 = testFile.createGroup("byte arrays", root);
    // obtaining the serialized object
    byte[] b = serializer.serialize(message);
    int len = b.length;
    byte[] dset_data = new byte[len + 1];
    // Initialize data.
    int indx = 0;
    for (int jndx = 0; jndx < len; jndx++) {
        dset_data[jndx] = b[jndx];
    }
    dset_data[len] = (byte) (indx);
    // create opaque dataset ---- error here...
    Datatype dtype = testFile.createDatatype(Datatype.CLASS_OPAQUE, (len * 4), Datatype.NATIVE, Datatype.NATIVE);
    Dataset dataset = testFile.createScalarDS("byte array", g1, dtype, dims1D, null, null, 0, dset_data); // error shown in this line
    // close file resource
    testFile.close();
}
I don't have a strong grip on HDF5, but you cannot directly use CLASS_OPAQUE like that.
An opaque data type is a user-defined data type that can be used in the same way as a built-in data type. To create an opaque type, check these links:
http://idlastro.gsfc.nasa.gov/idl_html_help/Opaque_Datatypes.html
To create an array datatype object:
Result = H5T_ARRAY_CREATE(Datatype_id, Dimensions)
example:
http://idlastro.gsfc.nasa.gov/idl_html_help/H5F_CREATE.html