iText Verify Signature with checking CRL - java

I am setting up a verifier that makes it possible to check the validity of a signature.
The signature I create is at DSS level LT, so the revocation information is embedded in the document.
The problem I now encounter is in the verifier I developed with iText: it allows verification of the validity of the signature, but not of the revocation information. According to my research, iText can verify this information in the signature itself, based on pkcs7.getCRLs().
However, a DSS signature embeds the revocation information in the DSS dictionaries.
Below is the code I use to verify the signature:
import com.itextpdf.text.pdf.AcroFields;
import com.itextpdf.text.pdf.PdfDictionary;
import com.itextpdf.text.pdf.PdfName;
import com.itextpdf.text.pdf.PdfReader;
import com.itextpdf.text.pdf.PdfString;
import com.itextpdf.text.pdf.security.PdfPKCS7;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.security.GeneralSecurityException;
import java.security.Principal;
import java.security.Security;
import java.security.cert.X509Certificate;
import java.util.Calendar;
import java.util.List;
import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class TestCheck {
    public static String pdf_file = "CURRENT_SIGNATURE.pdf";
    public static final boolean verifySignature(PdfReader pdfReader)
            throws GeneralSecurityException, IOException {
        boolean valid = false;
        AcroFields acroFields = pdfReader.getAcroFields();
        PdfDictionary sigDict = acroFields.getSignatureDictionary("Signature1");
        System.out.println(sigDict);
        PdfString contents = sigDict.getAsString(PdfName.CONTENTS);
        List<String> signatureNames = acroFields.getSignatureNames();
        if (!signatureNames.isEmpty()) {
            for (String name : signatureNames) {
                // if (acroFields.signatureCoversWholeDocument(name)) {
                PdfPKCS7 pkcs7 = acroFields.verifySignature(name);
                valid = pkcs7.verify();
                String reason = pkcs7.getReason();
                Calendar signedAt = pkcs7.getSignDate();
                X509Certificate signingCertificate = pkcs7.getSigningCertificate();
                Principal issuerDN = signingCertificate.getIssuerDN();
                Principal subjectDN = signingCertificate.getSubjectDN();
                System.out.println("valid = " + valid);
                //System.out.println("date = " + signedAt.getTime());
                //System.out.println("reason = " + reason);
                //System.out.println("issuer = " + issuerDN);
                //System.out.println("subject = " + subjectDN);
                System.out.println("CRL : " + pkcs7.getCRLs());
                break;
            }
            // }
        }
        return valid;
    }
    public static void main(String[] args) throws Exception {
        BouncyCastleProvider provider = new BouncyCastleProvider();
        Security.addProvider(provider);
        InputStream is = new FileInputStream(new File(pdf_file));
        PdfReader reader = new PdfReader(is);
        boolean ok = verifySignature(reader);
        System.out.println("Ver : " + ok);
    }
}

Initially I wanted to simply point you to the LtvVerifier class provided by both iText 5 and iText 7. Testing with that class, though, it turned out not to be applicable to the current PAdES BASELINE profiles; it was instead designed for an older PAdES-LTV profile (see ETSI TS 102 778-4 section 4, "Profile for PAdES-LTV").
If I understand your question correctly, though, you already know how to evaluate CRLs and OCSP responses. Thus, it would suffice if you learned how to extract the revocation information from the DSS dictionaries.
Your example code apparently uses iText 5.x, so I used the current iText 5.5.14-SNAPSHOT. Somewhat older versions should work with the same code, too.
PdfReader pdfReader = new PdfReader(...);
PdfDictionary dss = pdfReader.getCatalog().getAsDict(PdfName.DSS);
if (dss == null)
    System.out.println("No DSS in PDF");
else {
    PdfArray crlarray = dss.getAsArray(PdfName.CRLS);
    if (crlarray == null || crlarray.size() == 0)
        System.out.println("No CRLs in DSS");
    else {
        System.out.println("CRLs:");
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        for (int i = 0; i < crlarray.size(); i++) {
            PRStream stream = (PRStream) crlarray.getAsStream(i);
            X509CRL crl = (X509CRL) cf.generateCRL(new ByteArrayInputStream(PdfReader.getStreamBytes(stream)));
            System.out.printf("  '%s' update %s\n", crl.getIssuerX500Principal(), crl.getThisUpdate());
        }
    }
    PdfArray ocsparray = dss.getAsArray(PdfName.OCSPS);
    if (ocsparray == null || ocsparray.size() == 0)
        System.out.println("\nNo OCSP responses in DSS");
    else {
        System.out.println("\nOCSP Responses:");
        for (int i = 0; i < ocsparray.size(); i++) {
            PRStream stream = (PRStream) ocsparray.getAsStream(i);
            OCSPResp ocspResponse = new OCSPResp(PdfReader.getStreamBytes(stream));
            if (ocspResponse.getStatus() == 0) {
                try {
                    BasicOCSPResp basicOCSPResp = (BasicOCSPResp) ocspResponse.getResponseObject();
                    System.out.printf("  '%s' update %s\n", basicOCSPResp.getResponderId(), basicOCSPResp.getProducedAt());
                } catch (OCSPException e) {
                    throw new GeneralSecurityException(e);
                }
            }
        }
    }
}
(VerifyLtv test testExtractRevocationInformationCURRENT_SIGNATURE)
Instead of printing information to System.out you can of course collect the CRLs and OCSP responses and process them as you used to.
Also, you can of course check both the revocation data you retrieve from the PdfPKCS7 object and the data from the DSS. Adobe Acrobat also uses both during verification.
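For instance, once the CRLs have been collected from the DSS as above, a minimal sketch of a revocation check for the signer certificate retrieved from PdfPKCS7 could look like the following (the helper class is hypothetical; a complete validation would also verify each CRL's signature and validity period):
import java.security.cert.X509CRL;
import java.security.cert.X509CRLEntry;
import java.security.cert.X509Certificate;
import java.util.List;
// Hypothetical helper: checks a signer certificate against the CRLs pulled from the DSS.
public class DssCrlCheck {
    static boolean isRevoked(X509Certificate signerCert, List<X509CRL> dssCrls) {
        for (X509CRL crl : dssCrls) {
            // Only consult CRLs issued by the certificate's own issuer.
            if (crl.getIssuerX500Principal().equals(signerCert.getIssuerX500Principal())) {
                X509CRLEntry entry = crl.getRevokedCertificate(signerCert);
                if (entry != null) {
                    System.out.println("Revoked at " + entry.getRevocationDate());
                    return true;
                }
            }
        }
        return false;
    }
}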

Related

pdf clown- not highlighting specific search keyword

I am using PDF Clown with pdfclown-0.2.0-HEAD.jar. I have written the code below to highlight search keywords in a Chinese-language PDF file; the same code works fine with English PDF files.
import java.awt.Color;
import java.awt.Desktop;
import java.awt.geom.Rectangle2D;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UnsupportedEncodingException;
import java.net.URL;
import java.nio.charset.Charset;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Date;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.TimeUnit;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.io.BufferedInputStream;
import java.io.File;
import org.pdfclown.documents.Page;
import org.pdfclown.documents.contents.ITextString;
import org.pdfclown.documents.contents.TextChar;
import org.pdfclown.documents.contents.colorSpaces.DeviceRGBColor;
import org.pdfclown.documents.interaction.annotations.TextMarkup;
import org.pdfclown.documents.interaction.annotations.TextMarkup.MarkupTypeEnum;
import org.pdfclown.files.SerializationModeEnum;
import org.pdfclown.util.math.Interval;
import org.pdfclown.util.math.geom.Quad;
import org.pdfclown.tools.TextExtractor;
public class pdfclown2 {
private static int count;
public static void main(String[] args) throws IOException {
highlight("ebook.pdf","C:\\Users\\Downloads\\6.pdf");
System.out.println("OK");
}
private static void highlight(String inputPath, String outputPath) throws IOException {
URL url = new URL(inputPath);
InputStream in = new BufferedInputStream(url.openStream());
org.pdfclown.files.File file = null;
try {
file = new org.pdfclown.files.File("C:\\Users\\Desktop\\pdf\\test123.pdf");
Map<String, String> m = new HashMap<String, String>();
m.put("亿元或","hi");
m.put("收入亿来","hi");
System.out.println("map size"+m.size());
long startTime = System.currentTimeMillis();
// 2. Iterating through the document pages...
TextExtractor textExtractor = new TextExtractor(true, true);
for (final Page page : file.getDocument().getPages()) {
Map<Rectangle2D, List<ITextString>> textStrings = textExtractor.extract(page);
for (Map.Entry<String, String> entry : m.entrySet()) {
Pattern pattern;
String serachKey = entry.getKey();
final String translationKeyword = entry.getValue();
/*
if ((serachKey.contains(")") && serachKey.contains("("))
|| (serachKey.contains("(") && !serachKey.contains(")"))
|| (serachKey.contains(")") && !serachKey.contains("(")) || serachKey.contains("?")
|| serachKey.contains("*") || serachKey.contains("+")) {
pattern = Pattern.compile(Pattern.quote(serachKey), Pattern.CASE_INSENSITIVE);
}
else*/
pattern = Pattern.compile(serachKey, Pattern.CASE_INSENSITIVE);
// 2.1. Extract the page text!
//System.out.println(textStrings.toString().indexOf(entry.getKey()));
// 2.2. Find the text pattern matches!
final Matcher matcher = pattern.matcher(TextExtractor.toString(textStrings));
// 2.3. Highlight the text pattern matches!
textExtractor.filter(textStrings, new TextExtractor.IIntervalFilter() {
public boolean hasNext() {
// System.out.println(matcher.find());
// if(key.getMatchCriteria() == 1){
if (matcher.find()) {
return true;
}
/*
* } else if(key.getMatchCriteria() == 2) { if
* (matcher.hitEnd()) { count++; return true; } }
*/
return false;
}
public Interval<Integer> next() {
return new Interval<Integer>(matcher.start(), matcher.end());
}
public void process(Interval<Integer> interval, ITextString match) {
// Defining the highlight box of the text pattern
// match...
System.out.println(match);
/* List<Quad> highlightQuads = new ArrayList<Quad>();
{
Rectangle2D textBox = null;
for (TextChar textChar : match.getTextChars()) {
Rectangle2D textCharBox = textChar.getBox();
if (textBox == null) {
textBox = (Rectangle2D) textCharBox.clone();
} else {
if (textCharBox.getY() > textBox.getMaxY()) {
highlightQuads.add(Quad.get(textBox));
textBox = (Rectangle2D) textCharBox.clone();
} else {
textBox.add(textCharBox);
}
}
}
textBox.setRect(textBox.getX(), textBox.getY(), textBox.getWidth(), textBox.getHeight());
highlightQuads.add(Quad.get(textBox));
}*/
List<Quad> highlightQuads = new ArrayList<Quad>();
List<TextChar> textChars = match.getTextChars();
Rectangle2D firstRect = textChars.get(0).getBox();
Rectangle2D lastRect = textChars.get(textChars.size()-1).getBox();
Rectangle2D rect = firstRect.createUnion(lastRect);
highlightQuads.add(Quad.get(rect));
// subtype can be Highlight, Underline, StrikeOut, Squiggly
new TextMarkup(page, highlightQuads, translationKeyword, MarkupTypeEnum.Highlight);
}
public void remove() {
throw new UnsupportedOperationException();
}
});
}
}
SerializationModeEnum serializationMode = SerializationModeEnum.Standard;
file.save(new java.io.File(outputPath), serializationMode);
System.out.println("file created");
long endTime = System.currentTimeMillis();
System.out.println("seconds take for execution is:"+(endTime-startTime)/1000);
} catch (Exception e) {
e.printStackTrace();
}
finally{
in.close();
}
}
}
Kindly provide your inputs on highlighting a specific search keyword in non-English PDF files.
I am searching for the keyword in the text below, which is in Chinese:
普双套习近平修宪普京利用双套车绕开宪法装班要走普京
Your PDF Clown version
The PDF Clown version you retrieved here from Tymate's maven repository on GitHub was pushed there on April 23rd, 2015. The final (as of now) check-in to the PDF Clown subversion source code repository TRUNK on SourceForge, on the other hand, is from May 27th, 2015, and there actually are some 30 check-ins after April 23rd, 2015. Thus, you definitely are not using the most current version of this apparently dead PDF library project.
Using the current 0.2.0 snapshot
I tested your code with the 0.2.0 development version compiled from that trunk, and the result indeed is different: it is better insofar as the highlights have the width of the sought characters and are located nearer to the actual character positions. There still is a bug, though, as the second and third match highlights are somewhat off.
Fixing the bug
The remaining problem actually is not related to the language of the text. It simply is a bug in the processing of one type of PDF text drawing command, so it can be observed in documents with text in arbitrary languages. Because these commands are very seldom used nowadays, though, the bug is hardly ever observed, let alone reported. Your PDF, on the other hand, makes use of that kind of text drawing command.
The bug is in the ShowText class (package org.pdfclown.documents.contents.objects). At the end of the scan method the text line matrix in the graphics state is updated like this if the ShowText instance actually is a ShowTextToNextLine instance derived from it:
if(textScanner == null)
{
state.setTm(tm);
if(this instanceof ShowTextToNextLine)
{state.setTlm((AffineTransform)tm.clone());}
}
The text line matrix here is set to the text matrix after the move to the next line and the drawing of the text. This is wrong; it must instead be set to the text matrix right after the move to the next line, before the drawing of the text.
This can be fixed e.g. like this:
if(textScanner == null)
{
state.setTm(tm);
if(this instanceof ShowTextToNextLine)
state.getTlm().concatenate(new AffineTransform(1, 0, 0, 1, 0, -state.getLead()));
}
With this change in place, the resulting highlights are positioned correctly.

Sawtooth Invalid Batch or Signature

I have started playing around with Hyperledger Sawtooth recently, and I am having trouble submitting transactions in Java, while the Python code seems okay.
I prepared the Python code based on the API docs here and then tried to write the same in Java as well. Below is the Java code:
import com.google.protobuf.ByteString;
import com.mashape.unirest.http.Unirest;
import sawtooth.sdk.processor.Utils;
import sawtooth.sdk.protobuf.*;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;
import java.security.spec.ECGenParameterSpec;
public class BatchSender {
public static void main(String[] args) throws Exception{
KeyPairGenerator keyPairGenerator = KeyPairGenerator.getInstance("EC");
ECGenParameterSpec parameterSpec = new ECGenParameterSpec("secp256k1");
keyPairGenerator.initialize(parameterSpec);
KeyPair keyPair = keyPairGenerator.generateKeyPair();
Signature ecdsaSign = Signature.getInstance("SHA256withECDSA");
ecdsaSign.initSign(keyPair.getPrivate());
byte[] publicKeyBytes = keyPair.getPublic().getEncoded();
String publicKeyHex = Utils.hash512(publicKeyBytes);
ByteString publicKeyByteString = ByteString.copyFrom(new String(publicKeyBytes),"UTF-8");
String payload = "{'key':1, 'value':'value comes here'}";
String payloadBytes = Utils.hash512(payload.getBytes());
ByteString payloadByteString = ByteString.copyFrom(payload.getBytes());
TransactionHeader txnHeader = TransactionHeader.newBuilder().
setBatcherPubkeyBytes(publicKeyByteString).
setFamilyName("plain_info").
setFamilyVersion("1.0").
addInputs("1cf1266e282c41be5e4254d8820772c5518a2c5a8c0c7f7eda19594a7eb539453e1ed7").
setNonce("1").
addOutputs("1cf1266e282c41be5e4254d8820772c5518a2c5a8c0c7f7eda19594a7eb539453e1ed7").
setPayloadEncoding("application/json").
setPayloadSha512(payloadBytes).
setSignerPubkey(publicKeyHex).build();
ByteString txnHeaderBytes = txnHeader.toByteString();
ecdsaSign.update(txnHeaderBytes.toByteArray());
byte[] txnHeaderSignature = ecdsaSign.sign();
Transaction txn = Transaction.newBuilder().setHeader(txnHeaderBytes).setPayload(payloadByteString).setHeaderSignature(Utils.hash512(txnHeaderSignature)).build();
BatchHeader batchHeader = BatchHeader.newBuilder().setSignerPubkey(publicKeyHex).addTransactionIds(txn.getHeaderSignature()).build();
ByteString batchHeaderBytes = batchHeader.toByteString();
ecdsaSign.update(batchHeaderBytes.toByteArray());
byte[] batchHeaderSignature = ecdsaSign.sign();
Batch batch = Batch.newBuilder().setHeader(batchHeaderBytes).setHeaderSignature(Utils.hash512(batchHeaderSignature)).addTransactions(txn).build();
BatchList batchList = BatchList.newBuilder().addBatches( batch).build();
ByteString batchBytes = batchList.toByteString();
String serverResponse = Unirest.post("http://rest-api:8080/batches").header("Content-Type","application/octet-stream").body(batchBytes.toByteArray()).asString().getBody();
System.out.println(serverResponse);
}
}
Once I run it, I am getting
{
"error": {
"code": 30,
"message": "The submitted BatchList was rejected by the validator. It was poorly formed, or has an invalid signature.",
"title": "Submitted Batches Invalid"
}
}
On the dockers logs, I can see
sawtooth-validator-default | [2017-11-21 08:20:09.842 DEBUG interconnect] ServerThread receiving CLIENT_BATCH_SUBMIT_REQUEST message: 1242 bytes
sawtooth-validator-default | [2017-11-21 08:20:09.844 DEBUG signature_verifier] batch failed signature validation: 30a2f4a24be3e624f5a35b17cb505b65cb8dd41600545c6dcfac7534205091552e171082922d4eb71f1bb186fe49163f349c604b631f64fa8f1cfea1c8bb2818
sawtooth-validator-default | [2017-11-21 08:20:09.844 DEBUG interconnect] ServerThread sending CLIENT_BATCH_SUBMIT_RESPONSE to b'50b094689ac14b39'
I have checked the key sizes and verified the signature, and it all seems OK; however, I couldn't find why the batch is rejected.
Has anyone had a similar error response from Sawtooth before? Is it the batch format, or still a signature issue in the code above?
The problem is in setting the batch header signature and the transaction header signature. The SHA-512 hash should not be taken of those; the signature bytes should be encoded as hex strings.
Using the apache-commons-codec library, that can be done as:
import org.apache.commons.codec.binary.Hex;
Transaction txn = Transaction.newBuilder()
.setHeader(txnHeaderBytes)
.setPayload(payloadByteString)
.setHeaderSignature(Hex.encodeHexString(txnHeaderSignature))
.build();
Utils.hash512 should only be used on the payload bytes.
The Sawtooth Java SDK comes with a Signing class that is helpful for generating private/public key pairs and signing messages.
I have modified your code to use that class and included the changes from Boyd Johnson's answer for a working example (this uses Sawtooth Java SDK version 1.0):
import com.google.protobuf.ByteString;
import com.mashape.unirest.http.Unirest;
import com.mashape.unirest.http.exceptions.UnirestException;
import org.bitcoinj.core.ECKey;
import sawtooth.sdk.client.Signing;
import sawtooth.sdk.processor.Utils;
import sawtooth.sdk.protobuf.Batch;
import sawtooth.sdk.protobuf.BatchHeader;
import sawtooth.sdk.protobuf.BatchList;
import sawtooth.sdk.protobuf.Transaction;
import sawtooth.sdk.protobuf.TransactionHeader;
import java.security.SecureRandom;
public class BatchSender {
public static void main(String []args) throws UnirestException {
ECKey privateKey = Signing.generatePrivateKey(new SecureRandom());
String publicKey = Signing.getPublicKey(privateKey);
ByteString publicKeyByteString = ByteString.copyFromUtf8(publicKey);
String payload = "{'key':1, 'value':'value comes here'}";
String payloadBytes = Utils.hash512(payload.getBytes());
ByteString payloadByteString = ByteString.copyFrom(payload.getBytes());
TransactionHeader txnHeader = TransactionHeader.newBuilder()
.setBatcherPublicKeyBytes(publicKeyByteString)
.setSignerPublicKeyBytes(publicKeyByteString)
.setFamilyName("plain_info")
.setFamilyVersion("1.0")
.addInputs("1cf1266e282c41be5e4254d8820772c5518a2c5a8c0c7f7eda19594a7eb539453e1ed7")
.setNonce("1")
.addOutputs("1cf1266e282c41be5e4254d8820772c5518a2c5a8c0c7f7eda19594a7eb539453e1ed7")
.setPayloadSha512(payloadBytes)
.setSignerPublicKey(publicKey)
.build();
ByteString txnHeaderBytes = txnHeader.toByteString();
String txnHeaderSignature = Signing.sign(privateKey, txnHeaderBytes.toByteArray());
Transaction txn = Transaction.newBuilder()
.setHeader(txnHeaderBytes)
.setPayload(payloadByteString)
.setHeaderSignature(txnHeaderSignature)
.build();
BatchHeader batchHeader = BatchHeader.newBuilder()
.setSignerPublicKey(publicKey)
.addTransactionIds(txn.getHeaderSignature())
.build();
ByteString batchHeaderBytes = batchHeader.toByteString();
String batchHeaderSignature = Signing.sign(privateKey, batchHeaderBytes.toByteArray());
Batch batch = Batch.newBuilder()
.setHeader(batchHeaderBytes)
.setHeaderSignature(batchHeaderSignature)
.addTransactions(txn)
.build();
BatchList batchList = BatchList.newBuilder()
.addBatches(batch)
.build();
ByteString batchBytes = batchList.toByteString();
String serverResponse = Unirest.post("http://localhost:8008/batches")
.header("Content-Type","application/octet-stream")
.body(batchBytes.toByteArray())
.asString()
.getBody();
System.out.println(serverResponse);
}
}
I've just had the same issue. In my case the problem was in setting the payload data.
Be sure that:
You take the SHA-512 hash of your payload byte array and pass it to the TransactionHeader creation.
You pass the raw payload byte array to the Transaction creation.
So (see the sketch below):
payload byte array -> Transaction creation
SHA-512 hash of payload byte array -> TransactionHeader creation
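A minimal sketch of that rule (the helper class and method names are hypothetical; all other header fields are omitted here and would be set as in the full examples above):
import com.google.protobuf.ByteString;
import sawtooth.sdk.processor.Utils;
import sawtooth.sdk.protobuf.Transaction;
import sawtooth.sdk.protobuf.TransactionHeader;
public class PayloadRule {
    // The header carries only the SHA-512 hash of the payload.
    static TransactionHeader headerFor(byte[] payloadBytes) {
        return TransactionHeader.newBuilder()
                .setPayloadSha512(Utils.hash512(payloadBytes))
                .build();
    }
    // The transaction carries the raw payload bytes plus the hex-encoded header signature.
    static Transaction transactionFor(TransactionHeader header, byte[] payloadBytes, String headerSignatureHex) {
        return Transaction.newBuilder()
                .setHeader(header.toByteString())
                .setPayload(ByteString.copyFrom(payloadBytes))
                .setHeaderSignature(headerSignatureHex)
                .build();
    }
}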

Extract multiple X.509 certificates from PEM-formatted file in Java

I have a method which extracts an X.509 certificate from a given PEM-formatted file, using the Bouncy Castle library.
Imports:
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import org.bouncycastle.cert.X509CertificateHolder;
import org.bouncycastle.cert.jcajce.JcaX509CertificateConverter;
import org.bouncycastle.openssl.PEMParser;
Method:
/**
 * Reads an X509 certificate from a PEM file.
 *
 * @param certificateFile The PEM file.
 * @return the X509 certificate, or null.
 * @throws IOException if reading the file fails
 * @throws CertificateException if parsing the certificate fails
 */
public static X509Certificate readCertificatePEMFile(File certificateFile) throws IOException, CertificateException {
    if (certificateFile.exists() && certificateFile.canRead()) {
        try (InputStream inStream = new FileInputStream(certificateFile)) {
            try (PEMParser pemParser = new PEMParser(new InputStreamReader(inStream))) {
                Object object = pemParser.readObject();
                if (object != null && object instanceof X509CertificateHolder) {
                    return new JcaX509CertificateConverter().getCertificate((X509CertificateHolder) object);
                }
            }
        }
    }
    return null;
}
This works well for "normal" certificate files, e.g. a server certificate.
If I have a CA chain certificate file containing multiple certificates, how can I extract all certificates from this file? (The method shown only extracts the first certificate in the file.)
Try this code; it handles multiple certificates and private key entries in a PEM file:
Security.addProvider(new BouncyCastleProvider());
JcaPEMKeyConverter converter = new JcaPEMKeyConverter().setProvider("BC");
Object object;
while ((object = pemParser.readObject()) != null) {
    if (object instanceof X509CertificateHolder) {
        X509Certificate x509Cert = new JcaX509CertificateConverter().getCertificate((X509CertificateHolder) object);
    } else if (object instanceof PEMEncryptedKeyPair) {
        if (password == null) throw new IllegalArgumentException("Password required for parsing RSA private key");
        PEMDecryptorProvider decProv = new JcePEMDecryptorProviderBuilder().build(password.toCharArray());
        converter.getKeyPair(((PEMEncryptedKeyPair) object).decryptKeyPair(decProv));
    } else if (object instanceof PEMKeyPair) {
        converter.getKeyPair((PEMKeyPair) object);
    }
}
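For the certificate-only case from the question, a minimal sketch (a hypothetical helper built on the same Bouncy Castle classes as above) that collects every certificate in the file could look like this:
import java.io.File;
import java.io.FileReader;
import java.io.IOException;
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import java.util.ArrayList;
import java.util.List;
import org.bouncycastle.cert.X509CertificateHolder;
import org.bouncycastle.cert.jcajce.JcaX509CertificateConverter;
import org.bouncycastle.openssl.PEMParser;
public class PemCertificateChainReader {
    /** Reads all X.509 certificates from a PEM file, in file order. */
    public static List<X509Certificate> readCertificateChain(File pemFile)
            throws IOException, CertificateException {
        List<X509Certificate> certificates = new ArrayList<>();
        JcaX509CertificateConverter converter = new JcaX509CertificateConverter();
        try (PEMParser pemParser = new PEMParser(new FileReader(pemFile))) {
            Object object;
            while ((object = pemParser.readObject()) != null) {
                // Ignore non-certificate entries (keys, parameters, ...).
                if (object instanceof X509CertificateHolder) {
                    certificates.add(converter.getCertificate((X509CertificateHolder) object));
                }
            }
        }
        return certificates;
    }
}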

Merging MS Word documents with Java

I'm looking for Java libraries that read and write MS Word documents.
What I have to do is:
read a template file, .dot or .doc, and fill it with some data read from the DB
take data from another Word document and merge it with the file described above, preserving paragraph formats
users may make updates to the file.
I've searched and found Apache POI and OpenOffice UNO.
The first one can easily read a template and replace any placeholders with my own data from the DB. I didn't find anything about merging two or more documents.
OpenOffice UNO looks more stable, but complex too. Furthermore, I'm not sure that it has the ability to merge documents.
Are we looking in the right direction?
Another solution I've thought of is to convert the .doc files to .docx; that way I found more libraries that can help us merge documents.
But how can I do that?
Thanks!
You could take a look at Docmosis since it provides the four features you have mentioned (data population, template/document merging, DOC format and java interface). It has a couple of flavours (download, online service), but you could sign up for a free trial of the cloud service to see if Docmosis can do what you want (then you don't have to install anything) or read the online documentation.
It uses OpenOffice under the hood (you can see from the developer guide installation instructions) which does pretty decent conversions between documents. The UNO API has some complications - I would suggest either Docmosis or JODReports to isolate your project from UNO directly.
Hope that helps.
import java.io.File;
import java.util.List;
import javax.xml.bind.JAXBException;
import org.docx4j.dml.CTBlip;
import org.docx4j.openpackaging.exceptions.Docx4JException;
import org.docx4j.openpackaging.packages.WordprocessingMLPackage;
import org.docx4j.openpackaging.parts.Part;
import org.docx4j.openpackaging.parts.PartName;
import org.docx4j.openpackaging.parts.WordprocessingML.ImageBmpPart;
import org.docx4j.openpackaging.parts.WordprocessingML.ImageEpsPart;
import org.docx4j.openpackaging.parts.WordprocessingML.ImageGifPart;
import org.docx4j.openpackaging.parts.WordprocessingML.ImageJpegPart;
import org.docx4j.openpackaging.parts.WordprocessingML.ImagePngPart;
import org.docx4j.openpackaging.parts.WordprocessingML.ImageTiffPart;
import org.docx4j.openpackaging.parts.relationships.RelationshipsPart;
import org.docx4j.openpackaging.parts.relationships.RelationshipsPart.AddPartBehaviour;
import org.docx4j.relationships.Relationship;
public class MultipleDocMerge {
public static void main(String[] args) throws Docx4JException, JAXBException {
File first = new File("D:\\Mreg.docx");
File second = new File("D:\\Mreg1.docx");
File third = new File("D:\\Mreg4&19.docx");
File fourth = new File("D:\\test12.docx");
WordprocessingMLPackage f = WordprocessingMLPackage.load(first);
WordprocessingMLPackage s = WordprocessingMLPackage.load(second);
WordprocessingMLPackage a = WordprocessingMLPackage.load(third);
WordprocessingMLPackage e = WordprocessingMLPackage.load(fourth);
List body = s.getMainDocumentPart().getJAXBNodesViaXPath("//w:body", false);
for(Object b : body){
List filhos = ((org.docx4j.wml.Body)b).getContent();
for(Object k : filhos)
f.getMainDocumentPart().addObject(k);
}
List body1 = a.getMainDocumentPart().getJAXBNodesViaXPath("//w:body", false);
for(Object b : body1){
List filhos = ((org.docx4j.wml.Body)b).getContent();
for(Object k : filhos)
f.getMainDocumentPart().addObject(k);
}
List body2 = e.getMainDocumentPart().getJAXBNodesViaXPath("//w:body", false);
for(Object b : body2){
List filhos = ((org.docx4j.wml.Body)b).getContent();
for(Object k : filhos)
f.getMainDocumentPart().addObject(k);
}
List<Object> blips = e.getMainDocumentPart().getJAXBNodesViaXPath("//a:blip", false);
for(Object el : blips){
try {
CTBlip blip = (CTBlip) el;
RelationshipsPart parts = e.getMainDocumentPart().getRelationshipsPart();
Relationship rel = parts.getRelationshipByID(blip.getEmbed());
Part part = parts.getPart(rel);
if(part instanceof ImagePngPart)
System.out.println(((ImagePngPart) part).getBytes());
if(part instanceof ImageJpegPart)
System.out.println(((ImageJpegPart) part).getBytes());
if(part instanceof ImageBmpPart)
System.out.println(((ImageBmpPart) part).getBytes());
if(part instanceof ImageGifPart)
System.out.println(((ImageGifPart) part).getBytes());
if(part instanceof ImageEpsPart)
System.out.println(((ImageEpsPart) part).getBytes());
if(part instanceof ImageTiffPart)
System.out.println(((ImageTiffPart) part).getBytes());
Relationship newrel = f.getMainDocumentPart().addTargetPart(part,AddPartBehaviour.RENAME_IF_NAME_EXISTS);
blip.setEmbed(newrel.getId());
f.getMainDocumentPart().addTargetPart(e.getParts().getParts().get(new PartName("/word/"+rel.getTarget())));
} catch (Exception ex){
ex.printStackTrace();
} }
File saved = new File("D:\\saved1.docx");
f.save(saved);
}
}
I've developed the following class (using Apache POI):
import java.io.InputStream;
import java.io.OutputStream;
import java.util.ArrayList;
import java.util.List;
import org.apache.poi.openxml4j.opc.OPCPackage;
import org.apache.poi.xwpf.usermodel.XWPFDocument;
import org.openxmlformats.schemas.wordprocessingml.x2006.main.CTBody;
public class WordMerge {
private final OutputStream result;
private final List<InputStream> inputs;
private XWPFDocument first;
public WordMerge(OutputStream result) {
this.result = result;
inputs = new ArrayList<>();
}
public void add(InputStream stream) throws Exception{
inputs.add(stream);
OPCPackage srcPackage = OPCPackage.open(stream);
XWPFDocument src1Document = new XWPFDocument(srcPackage);
if(inputs.size() == 1){
first = src1Document;
} else {
CTBody srcBody = src1Document.getDocument().getBody();
first.getDocument().addNewBody().set(srcBody);
}
}
public void doMerge() throws Exception{
first.write(result);
}
public void close() throws Exception{
result.flush();
result.close();
for (InputStream input : inputs) {
input.close();
}
}
}
And its use:
public static void main(String[] args) throws Exception {
FileOutputStream faos = new FileOutputStream("/home/victor/result.docx");
WordMerge wm = new WordMerge(faos);
wm.add( new FileInputStream("/home/victor/001.docx") );
wm.add( new FileInputStream("/home/victor/002.docx") );
wm.doMerge();
wm.close();
}
The Apache POI code does not work for Images.

Sign data using PKCS #7 in JAVA

I want to sign a text file (may be a .exe file or something else in the future)
using PKCS#7 and verify the signature using Java.
What do I need to know?
Where will I find an API (.jar and documentation)?
What are the steps I need to follow in order to sign data and verify the data?
Please provide a code snippet if possible.
I reckon you need the following 2 Bouncy Castle jars to generate the PKCS7 digital signature:
bcprov-jdk15on-147.jar (for JDK 1.5 - JDK 1.7)
bcmail-jdk15on-147.jar (for JDK 1.5 - JDK 1.7)
You can download the Bouncy Castle jars from here.
You need to set up your keystore with the public & private key pair.
You need only the private key to generate the digital signature and the public key to verify it.
Here's how you'd PKCS#7-sign content (exception handling omitted for brevity):
import java.io.FileInputStream;
import java.io.InputStream;
import java.security.KeyStore;
import java.security.PrivateKey;
import java.security.Security;
import java.security.cert.Certificate;
import java.security.cert.X509Certificate;
import java.util.ArrayList;
import java.util.List;
import org.bouncycastle.cert.jcajce.JcaCertStore;
import org.bouncycastle.cms.CMSProcessableByteArray;
import org.bouncycastle.cms.CMSSignedData;
import org.bouncycastle.cms.CMSSignedDataGenerator;
import org.bouncycastle.cms.CMSTypedData;
import org.bouncycastle.cms.jcajce.JcaSignerInfoGeneratorBuilder;
import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.bouncycastle.operator.ContentSigner;
import org.bouncycastle.operator.jcajce.JcaContentSignerBuilder;
import org.bouncycastle.operator.jcajce.JcaDigestCalculatorProviderBuilder;
import org.bouncycastle.util.Store;
import org.bouncycastle.util.encoders.Base64;
public final class PKCS7Signer {
private static final String PATH_TO_KEYSTORE = "/path/to/keyStore";
private static final String KEY_ALIAS_IN_KEYSTORE = "My_Private_Key";
private static final String KEYSTORE_PASSWORD = "MyPassword";
private static final String SIGNATUREALGO = "SHA1withRSA";
public PKCS7Signer() {
}
KeyStore loadKeyStore() throws Exception {
KeyStore keystore = KeyStore.getInstance("JKS");
InputStream is = new FileInputStream(PATH_TO_KEYSTORE);
keystore.load(is, KEYSTORE_PASSWORD.toCharArray());
return keystore;
}
CMSSignedDataGenerator setUpProvider(final KeyStore keystore) throws Exception {
Security.addProvider(new BouncyCastleProvider());
Certificate[] certchain = (Certificate[]) keystore.getCertificateChain(KEY_ALIAS_IN_KEYSTORE);
final List<Certificate> certlist = new ArrayList<Certificate>();
for (int i = 0, length = certchain == null ? 0 : certchain.length; i < length; i++) {
certlist.add(certchain[i]);
}
Store certstore = new JcaCertStore(certlist);
Certificate cert = keystore.getCertificate(KEY_ALIAS_IN_KEYSTORE);
ContentSigner signer = new JcaContentSignerBuilder(SIGNATUREALGO).setProvider("BC").
build((PrivateKey) (keystore.getKey(KEY_ALIAS_IN_KEYSTORE, KEYSTORE_PASSWORD.toCharArray())));
CMSSignedDataGenerator generator = new CMSSignedDataGenerator();
generator.addSignerInfoGenerator(new JcaSignerInfoGeneratorBuilder(new JcaDigestCalculatorProviderBuilder().setProvider("BC").
build()).build(signer, (X509Certificate) cert));
generator.addCertificates(certstore);
return generator;
}
byte[] signPkcs7(final byte[] content, final CMSSignedDataGenerator generator) throws Exception {
CMSTypedData cmsdata = new CMSProcessableByteArray(content);
CMSSignedData signeddata = generator.generate(cmsdata, true);
return signeddata.getEncoded();
}
public static void main(String[] args) throws Exception {
PKCS7Signer signer = new PKCS7Signer();
KeyStore keyStore = signer.loadKeyStore();
CMSSignedDataGenerator signatureGenerator = signer.setUpProvider(keyStore);
String content = "some bytes to be signed";
byte[] signedBytes = signer.signPkcs7(content.getBytes("UTF-8"), signatureGenerator);
System.out.println("Signed Encoded Bytes: " + new String(Base64.encode(signedBytes)));
}
}
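The question also asks how to verify the signature; a minimal verification sketch using the same Bouncy Castle CMS classes (the class and method names here are hypothetical, recent Bouncy Castle versions are assumed, and trust/revocation checking of the signer certificate is out of scope) could look like this:
import java.security.Security;
import java.util.Collection;
import org.bouncycastle.cert.X509CertificateHolder;
import org.bouncycastle.cms.CMSSignedData;
import org.bouncycastle.cms.SignerInformation;
import org.bouncycastle.cms.jcajce.JcaSimpleSignerInfoVerifierBuilder;
import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.bouncycastle.util.Store;
public final class PKCS7Verifier {
    /** Returns true if every signer's signature in the CMS/PKCS#7 blob verifies against its embedded certificate. */
    public static boolean verifyPkcs7(byte[] signedBytes) throws Exception {
        Security.addProvider(new BouncyCastleProvider());
        CMSSignedData signedData = new CMSSignedData(signedBytes);
        Store<X509CertificateHolder> certificates = signedData.getCertificates();
        for (SignerInformation signer : signedData.getSignerInfos().getSigners()) {
            Collection<X509CertificateHolder> matches = certificates.getMatches(signer.getSID());
            if (matches.isEmpty()) {
                return false;
            }
            // Verify the signature with the certificate embedded for this signer.
            X509CertificateHolder certificate = matches.iterator().next();
            if (!signer.verify(new JcaSimpleSignerInfoVerifierBuilder().setProvider("BC").build(certificate))) {
                return false;
            }
        }
        return true;
    }
}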
PKCS#7 is known as CMS now (Cryptographic Message Syntax), and you will need the Bouncy Castle PKIX libraries to create one. It has ample documentation and a well established mailing list.
I won't supply a code snippet; it is against house rules. Try it yourself first.
