I exported .key and .req files from my Java application.
Code:
// Write the certificate request (.req) as raw bytes
BufferedOutputStream bos1 = new BufferedOutputStream(new FileOutputStream(txtRequest.getText()));
bos1.write(certificate.getBytes());
bos1.close();
// Write the private key (.key) in PEM format with Bouncy Castle's PEMWriter
BufferedWriter bw = new BufferedWriter(new FileWriter(txtPrivateKey.getText()));
PEMWriter writer = new PEMWriter(bw);
writer.writeObject(getPrivateKey());
writer.close();
bw.close();
The problem is that when I build a .pfx file from these .req and .key files, it installs on Windows XP but fails to install on Windows Server 2008 and Windows 7.
What should I do? Please show me the way.
PKCS12 files can be read and exported by the openssl tool:
http://en.wikipedia.org/wiki/PKCS12
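On the command line, openssl pkcs12 -export -inkey private.key -in certificate.crt -out export.pfx builds such a file (the file names here are placeholders). As a sketch of the same thing from Java, assuming the signed certificate is already available in memory (the alias, password, and signedCertificate variable are hypothetical), the standard JDK KeyStore API can write a PKCS12 file directly:

// Minimal sketch: bundle the private key and its certificate into a .pfx
KeyStore pfx = KeyStore.getInstance("PKCS12");
pfx.load(null, null); // initialize an empty keystore
pfx.setKeyEntry("mykey", getPrivateKey(), "changeit".toCharArray(),
        new java.security.cert.Certificate[] { signedCertificate });
FileOutputStream fos = new FileOutputStream("export.pfx");
pfx.store(fos, "changeit".toCharArray());
fos.close();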
I'm dealing with some gzipped pack200 files and have no trouble unpacking them with the command line tool. I only run into problems when I attempt to unpack the files with the pack200 library.
For reference, this is the method I am using to unpack the files:
//Output from this can be properly unpacked with command line tool
InputStream in = new GZIPInputStream(new ByteArrayInputStream(compressed));
//This is where things go awry
Pack200.Unpacker unpacker = Pack200.newUnpacker();
JarOutputStream out = new JarOutputStream(new FileOutputStream("file.jar"));
unpacker.unpack(in, out);
Here is the output of unpacker.properties():
com.sun.java.util.jar.pack.default.timezone: false
com.sun.java.util.jar.pack.disable.native: false
com.sun.java.util.jar.pack.verbose: 0
pack.class.attribute.CompilationID: RUH
pack.class.attribute.SourceID: RUH
pack.code.attribute.CharacterRangeTable: NH[PHPOHIIH]
pack.code.attribute.CoverageTable: NH[PHHII]
pack.deflate.hint: keep
pack.effort: 5
pack.keep.file.order: true
pack.modification.time: keep
pack.segment.limit: -1
pack.unknown.attribute: pass
Some other relevant information:
The jar files output by the library are consistently smaller than those unpacked by the command line tool.
The library-generated files use a newer version of the .zip format (0x14 vs 0x0A).
unpack200.exe version 1.30, 07/05/05
jdk version 1.7.0_21
So to reiterate, the jar files generated by the command line tool function properly while those generated by the library do not.
I very much appreciate any help or guidance.
It was something very simple, but I'm happy to have found the problem. Here is the solution I was able to use:
//Output from this can be properly unpacked with command line tool
InputStream in = new GZIPInputStream(new ByteArrayInputStream(compressed));
//The fix: close the JarOutputStream when unpacking is done
Pack200.Unpacker unpacker = Pack200.newUnpacker();
JarOutputStream out = new JarOutputStream(new FileOutputStream("file.jar"));
unpacker.unpack(in, out);
out.close();
Don't forget your JarOutputStream.close(), kids.
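The same fix reads a little more safely with try-with-resources, which closes the stream even if unpack() throws (a sketch, not part of the original answer):

// try-with-resources guarantees the JarOutputStream is closed
try (JarOutputStream out = new JarOutputStream(new FileOutputStream("file.jar"))) {
    Pack200.newUnpacker().unpack(in, out);
}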
I currently have a Spring Boot web application. The application writes to a file every time the web app is refreshed. Locally I am able to see the files in the root path directory, but when I upload my .jar file to Cloud Foundry, how can I obtain the files that are being written?
Code snippet that writes the file:
try {
    Date date = new Date();
    SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd HH-mm-ss");
    // File name carries a timestamp, e.g. "2017-01-01 12-00-00data.txt"
    File file = new File(dateFormat.format(date) + "data.txt");
    BufferedWriter out = new BufferedWriter(new FileWriter(file));
    out.write("Some Data is being written");
    out.close();
} catch (IOException e) {
    e.printStackTrace();
}
I am able to find data.txt in my root folder. But how can I get those files after I package my application into a jar and push it to Cloud Foundry?
cf push command:
cf push testapp -p target/webapp.jar
I hope this doc will be useful for you:
http://docs.spring.io/spring-boot/docs/current/reference/html/howto-traditional-deployment.html#howto-convert-an-existing-application-to-spring-boot
Try this:
cf ssh-code
and then:
scp -P 2222 -o User=cf:<APP_GUID>/0 ssh.<system domain>:/<path>/<file name> <LOCAL_FILE_SYS>
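(For context, an addition to the answer above: cf ssh-code prints a one-time passcode that scp prompts for as the password, and the value for <APP_GUID> can be obtained with cf app testapp --guid. For a Spring Boot jar pushed with a buildpack, files written to the working directory typically end up under /home/vcap/app inside the container.)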
Thanks,
Chandan
I'm trying to transfer a PGP file with apache.commons.net.ftp.FTPClient. The transfer seems to succeed, but when I try to convert it to a txt file I get this error:
gpg: [don't know]: invalid packet (ctb=20)
When I check the exact size of the downloaded file, I notice that it is about 1KB smaller than the original file.
Here is the code for downloading the file:
FileOutputStream fos = new FileOutputStream(Localfilename);
InputStream inputStream = ftpClient.retrieveFileStream(remoteFileDir);
IOUtils.copy(inputStream, fos);
fos.flush();
IOUtils.closeQuietly(fos);
IOUtils.closeQuietly(inputStream);
// finalize the transfer started by retrieveFileStream()
boolean commandOK = ftpClient.completePendingCommand();
Can anyone spot the mistake in my approach or code?
[edited] Note that the original file decodes (converts to txt) successfully, so the problem occurs while downloading the file.
[edited2] When I run the program on my Windows desktop and download the file on Windows, it decodes fine; the problem only appears when I run the program on a Linux server!
I found my problem!
The problem was with the addressing of the remote path, a silly mistake!
So if anyone has this problem, recheck the addresses, and then recheck them again.
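One more thing worth ruling out for readers with the same symptom (an addition, not part of the original thread): Commons Net's FTPClient defaults to ASCII transfer mode, which rewrites line endings and can shrink a binary file, and its effect differs between Windows and Linux clients. Forcing binary mode before retrieving eliminates that cause:

// Force binary mode so the transfer does not translate line endings
ftpClient.setFileType(FTP.BINARY_FILE_TYPE);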
I have a Java class that uploads a text file from a Windows client to a Linux server.
The file I am trying to upload is encoded using Cp1252 or ISO-8859-1.
When the file is uploaded, it becomes encoded as UTF-8, and strings containing accents like éèà can't be read.
The command
file -i *
on the Linux server tells me that it's encoded using UTF-8.
I think the encoding was changed during the upload, so I added this code to my servlet:
String currentEncoding=System.getProperty("file.encoding");
System.setProperty("file.encoding", "Cp1252");
item.write(file);
System.setProperty("file.encoding", currentEncoding);
In the jsp file, I have this code:
<form name="formUpload"
action="..." method="post"
enctype="multipart/form-data" accept-charset="ISO-8859-1">
The lib I use for the upload is Apache Commons FileUpload.
Does anyone have a clue? I'm really running out of ideas!
Thanks,
Otmane MALIH
Setting the system property file.encoding will only work when you start Java. Instead, you will have to open the file with this code:
public static BufferedWriter createWriter( File file, Charset charset ) throws IOException {
FileOutputStream stream = new FileOutputStream( file );
return new BufferedWriter( new OutputStreamWriter( stream, charset ) );
}
Use Charset.forName("ISO-8859-1") as the charset parameter.
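A hypothetical usage example (the file name is made up), combined with try-with-resources so the writer is closed automatically:

// Write ISO-8859-1 text through the helper above
try (BufferedWriter out = createWriter(new File("upload.txt"), Charset.forName("ISO-8859-1"))) {
    out.write("éèà");
}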
[EDIT] Your problem is most likely the file command. MacOS is the only OS in the world which can tell you the encoding of a file with confidence. Windows and Linux have to make a guess. This guess can be wrong.
So what you need to do is to open the file with an editor where you specify the encoding. You need to do that on Windows (to make sure that the file really was saved with Cp1252; some applications ignore the platform and always save their data in UTF-8).
And you need to do the same on Linux. If you just open the file, the editor will take the platform encoding (which is UTF-8 on modern Linux systems) and try to read the file with that -> ISO-8859-1 umlauts will be garbled. But if you open the file with ISO-8859-1, then UTF-8 will be garbled. That's the only way to be sure what the encoding of a text file really is.
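The same check can be done from Java instead of an editor (a sketch; the file path is hypothetical): decode the same bytes with both candidate charsets and see which result shows intact accents.

// Decode the raw bytes both ways; the readable output reveals the real encoding
byte[] bytes = Files.readAllBytes(Paths.get("upload.txt"));
System.out.println("as ISO-8859-1: " + new String(bytes, StandardCharsets.ISO_8859_1));
System.out.println("as UTF-8:      " + new String(bytes, StandardCharsets.UTF_8));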
I am using java.io.File to create a new file in Java. On macOS I am using the same code that worked fine on Windows, but it's not working on the Mac.
I am using
File file = new File(path);
to create a new file.
On Windows:
I used String path = "C:\test\1.html";
It was working fine.
On a Mac:
I want to create a file at "/Users/pls/1.html"
But it throws this error:
java.io.FileNotFoundException : /Users/pls/1.html (No such file or directory)
Please help
Don't write separators manually; use the system-independent File.separator instead.
String path = File.separator+"Users"+File.separator+"pls"+File.separator+"1.html";
File file = new File(path);
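A follow-up note, not part of the original answer: new File(path) only creates a path object in memory; nothing appears on disk until a stream is opened or createNewFile() is called, and either one fails with "No such file or directory" when a parent directory is missing. A minimal sketch:

File file = new File(path);
file.getParentFile().mkdirs(); // create any missing parent directories
file.createNewFile();          // now actually create the file on disk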