I want to make a custom ASN.1 structure which consists of three PrintableStrings and one OctetString. I am using the Bouncy Castle framework to work with this.
So I set the needed parameters in my class, and now I have to return this structure in ASN.1 format, encode it with Base64 (which takes a byte[] parameter), and then turn it into PEM format.
So my question is: what type of object do I have to return from the method getASN1format()?
My code:
import org.bouncycastle.asn1.*;
import java.io.IOException;
public class ASN1Handshake1 implements ASN1Encodable {
private DERPrintableString A, B, ID_PASS;
private ASN1OctetString ID_K;
public ASN1Handshake1(String A, String B, String ID_K, String ID_PASS, TTP TTPs ) throws IOException {
this.A = new DERPrintableString(A);
this.B = new DERPrintableString(B);
this.ID_K = new DEROctetString(ID_K.getBytes());
this.ID_PASS = new DERPrintableString(ID_PASS);
}
public ?? getASN1format(){
//TODO
}
@Override
public ASN1Primitive toASN1Primitive() {
return null;
}
}
I'm using Bouncy Castle 1.57 (bcprov-jdk15on) for this code.
First of all, keep in mind that ASN.1 is not a format per se, it's a description language that defines a structure, and PEM is a format that uses base 64. Many cryptography standards use ASN.1 to define their data structures, and PEM or DER (Distinguished Encoding Rules) to serialize those structures.
So, if you want to get the ASN.1 structure and format it as base64, you can do as below. You don't need a getASN1format method; just use the existing ones.
The fields can't just be "loose" in the ASN.1 structure, so I decided to put them in a sequence (using the org.bouncycastle.asn1.DERSequence class), which is the natural choice for storing the fields of a structure. I put them in the order they're declared, but of course you can choose any order you want.
I've also changed the variable names to follow Java's code conventions (names start with lowercase letters). So the class code is:
import org.bouncycastle.asn1.ASN1Encodable;
import org.bouncycastle.asn1.ASN1Object;
import org.bouncycastle.asn1.ASN1OctetString;
import org.bouncycastle.asn1.ASN1Primitive;
import org.bouncycastle.asn1.ASN1Sequence;
import org.bouncycastle.asn1.DEROctetString;
import org.bouncycastle.asn1.DERPrintableString;
import org.bouncycastle.asn1.DERSequence;
public class ASN1Handshake1 extends ASN1Object {
private DERPrintableString a, b, idPass;
private ASN1OctetString idK;
// removed the TTPs parameter (it wasn't being used)
public ASN1Handshake1(String a, String b, String idK, String idPass) {
this.a = new DERPrintableString(a);
this.b = new DERPrintableString(b);
this.idK = new DEROctetString(idK.getBytes());
this.idPass = new DERPrintableString(idPass);
}
// returns a DERSequence containing all the fields
@Override
public ASN1Primitive toASN1Primitive() {
ASN1Encodable[] v = new ASN1Encodable[] { this.a, this.b, this.idK, this.idPass };
return new DERSequence(v);
}
}
To create a handshake object and convert it to base64 (the code below is not handling exceptions, so add the try/catch block accordingly):
import org.bouncycastle.util.encoders.Base64;
// create handshake object with some sample data
ASN1Handshake1 handshake = new ASN1Handshake1("a", "b", "ID_K", "ID_PASS");
// convert it to base64
String base64String = new String(Base64.encode(handshake.getEncoded()));
System.out.println(base64String);
This will output the handshake structure in base64 format:
MBUTAWETAWIEBElEX0sTB0lEX1BBU1M=
Please note that this is not a complete PEM (with headers like -----BEGIN CERTIFICATE-----) because your custom structure is not a predefined standard. So you'll have to stick with this generic base64 string.
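That said, if you want PEM-style armor anyway, Bouncy Castle's PemWriter (org.bouncycastle.util.io.pem) can wrap the DER bytes under a label of your choosing. This is just a sketch; the label "HANDSHAKE1" is an arbitrary example, not a standard type:
```java
import java.io.StringWriter;
import org.bouncycastle.util.io.pem.PemObject;
import org.bouncycastle.util.io.pem.PemWriter;

StringWriter sw = new StringWriter();
try (PemWriter pemWriter = new PemWriter(sw)) {
    // "HANDSHAKE1" is a made-up label; pick whatever name you like
    pemWriter.writeObject(new PemObject("HANDSHAKE1", handshake.getEncoded()));
}
System.out.println(sw);
// -----BEGIN HANDSHAKE1-----
// MBUTAWETAWIEBElEX0sTB0lEX1BBU1M=
// -----END HANDSHAKE1-----
```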
To check that the base64 string contains the ASN.1 sequence, just do:
// read from base64 String
ASN1Sequence seq = (ASN1Sequence) DERSequence.fromByteArray(Base64.decode(base64String.getBytes()));
int n = seq.size();
for (int i = 0; i < n; i++) {
ASN1Encodable obj = seq.getObjectAt(i);
if (obj instanceof DEROctetString) {
System.out.println(new String(((DEROctetString) obj).getOctets()));
} else {
System.out.println(obj);
}
}
The output is:
a
b
ID_K
ID_PASS
Related
I have set up a Java program for my apprenticeship project that takes in a JSON file of English strings and outputs a JSON file in a different language, which is specified in the console. Some languages, like French and Italian, output the correct translations, whereas Russian or Japanese output question marks.
I searched around and saw that I needed to get the bytes of my string and then encode them as UTF-8. I did this but was still getting question marks, so I started using the standard charsets built into Java and tried different ways of encoding/decoding the string. I tried this:
and this gave me a different output: Ð?Ñ?ивеÑ?
package com.bis.propertyfiletranslator;
import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.util.List;
import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
import com.google.api.client.googleapis.json.GoogleJsonResponseException;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.services.translate.Translate;
import com.google.api.services.translate.model.TranslationsListResponse;
import com.google.api.services.translate.model.TranslationsResource;
public class Translator {
public static Translate.Translations.List list;
private static final Charset UTF_8 = Charset.forName("UTF-8");
private static final Charset ISO = Charset.forName("ISO-8859-1");
public static void translateJSONMapThroughGoogle(String input, String output, String API, String language,
List<String> subLists) throws IOException, GeneralSecurityException {
Translate t = new Translate.Builder(GoogleNetHttpTransport.newTrustedTransport(),
JacksonFactory.getDefaultInstance(), null).setApplicationName("PhoenUX-Google-Translate").build();
try {
list = t.new Translations().list(subLists, language).setFormat("text");
list.setKey(API);
} catch (GoogleJsonResponseException e) {
if (e.getDetails().getMessage().equals("Invalid Value")) {
System.err.println(
"\n Language not currently supported, check the accepted language codes and try again.\n\n Language Requested: "
+ language);
} else {
System.out.println(e.getDetails().getMessage());
}
}
// (the response presumably comes from executing the request built above, e.g.:)
TranslationsListResponse response = list.execute();
for (TranslationsResource translationsResource : response.getTranslations()) {
for (String key : JSONFunctions.jsonHashMap.keySet()) {
JSONFunctions.jsonHashMap.remove(key);
String value = translationsResource.getTranslatedText();
String encoded = new String(value.getBytes(StandardCharsets.UTF_8), StandardCharsets.ISO_8859_1);
JSONFunctions.jsonHashMap.put(key, encoded);
System.out.println(encoded);
break;
}
}
JSONFunctions.outputTranslationsBackToJson(output);
}
}
So this is using the Google Cloud client library; I added a sysout so I could see the results of what I had tried, so this code should be all you need to replicate it.
I expect the output of "Hello" to be "Привет" (Russian); the actual output is ???? or Ð?Ñ?ивеÑ?, depending on the encoding I use.
String encoded = new String(...) is dead wrong. Just do:
put(key, value)
Note that System.out.println will always have problems, as the OS encoding might be some Windows ANSI encoding. That is likely not Unicode-capable, while a Java String holds Unicode.
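In other words, the translated String is already proper Unicode; re-encoding it through raw bytes only corrupts it. A minimal sketch of the fix, using the loop from the question:
```java
String value = translationsResource.getTranslatedText();
// store the Java String as-is; no byte-level re-encoding is needed
JSONFunctions.jsonHashMap.put(key, value);
```
What matters afterwards is that the JSON file itself is written with an explicit UTF-8 writer, not with the platform default encoding.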
I need to get big Boolean arrays or BitSets from Java into Python via a text file. Ideally I want to go via a Base64 representation to stay compact, but still be able to embed the value in a CSV file. (So the boolean array will be one column in a CSV file.)
However I am having issues to get the byte alignment right. Where/how should I specify the correct byte order?
This is one example, working in the sense that it executes but not working in that my bits aren't where I want them.
Java:
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.util.Base64;
import java.util.Base64.Encoder;
import java.util.BitSet;
public class basictest {
public static void main(String[] args) throws Exception {
// TODO Auto-generated method stub
Encoder b64 = Base64.getEncoder();
String name = "name";
BitSet b = new BitSet();
b.set(444);
b.set(777);
b.set(555);
byte[] bBytes = b.toByteArray();
String fp_str = b64.encodeToString(bBytes);
BufferedWriter w = new BufferedWriter(new FileWriter("out.tsv"));
w.write(name + "\t" + fp_str + "\n");
w.close();
}
}
Python:
import numpy as np
import base64
from bitstring import BitArray, BitStream, ConstBitStream
filename = "out.tsv"
with open(filename) as file:
data = file.readline().split('\t')
b_b64 = data[1]
b_bytes = base64.b64decode(b_b64)
b_bits = BitArray(bytes=b_bytes)
b_bits[444] # False
b_bits[555] # False
b_bits[777] # False
# but
b_bits[556] # True
# it's not shifted:
b_bits[445] # False
I am now reversing the bits in every byte using https://stackoverflow.com/a/5333563/1259675:
numbits = 8
r_bytes = [
sum(1<<(numbits-1-i) for i in range(numbits) if b>>i&1)
for b in b_bytes]
b_bits = BitArray(r_bytes)
This works, but is there a method that doesn't involve myself fiddling with the bits?
If:
the maximum bit to set is "sufficiently small",
and the data you want to encode doesn't vary too much in size,
then one approach can be:
set a maximum (and minimum) marker bit in Java (LEFT_SIGN / RIGHT_SIGN below),
and ignore them in Python.
Then it could work without byte reversal or further transformation:
// assuming a 1024 bit word
public static final int LEFT_SIGN = 0;
public static final int RIGHT_SIGN = 1025; // choose a size that fits your needs [0 .. Integer.MAX_VALUE - 1 (theoretically)]
public static void main(String[] args) throws Exception {
...
b.set(LEFT_SIGN);
b.set(444 + 1);
b.set(777 + 1);
b.set(555 + 1);
b.set(RIGHT_SIGN);
...
and then in python:
# as before ..
b_bits[0] # Ignore!
b_bits[445] # True
b_bits[556] # True
b_bits[778] # True
b_bits[1025] # Ignore!;)
Your convention (i.e. the encoding) would be the (maximum) "word length", with all its benefits and drawbacks.
We can use the bitarray package from Python for this particular use case.
from bitarray import bitarray
import base64
with open(filename) as file:
data = file.readline().strip().split('\t')
b_b64 = data[1]
b_bytes = base64.b64decode(b_b64)
bs = bitarray(endian='little')
bs.frombytes(b_bytes)
print(bs)
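With endian='little', bit i of each byte is that byte's least significant bit, which matches how BitSet.toByteArray() lays bits out, so the indices should line up without any reversal. A quick check (not part of the original answer):
```python
# the bits set in the Java snippet should now be found at the same indices
print(bs[444], bs[555], bs[777])   # expected: True True True
```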
After a week of work I designed a binary file format and made a Java reader for it. It's just an experiment, which works fine unless I use the GZip compression function.
I called my binary type MBDF (Minimal Binary Database Format), and it can store 8 different types:
Integer (there is no separate byte, short, or long type; integers are stored in a variable amount of space, so bigger numbers take more space)
Float-32 (32-bits floating point format, like java's float type)
Float-64 (64-bits floating point format, like java's double type)
String (A string in UTF-16 format)
Boolean
Null (Just specifies a null value)
Array (Something like java's ArrayList<Object>)
Compound (A String - Object map)
I used this data as test data:
COMPOUND {
float1: FLOAT_32 3.3
bool2: BOOLEAN true
float2: FLOAT_64 3.3
int1: INTEGER 3
compound1: COMPOUND {
xml: STRING "two length compound"
int: INTEGER 23
}
string1: STRING "Hello world!"
string2: STRING "3"
arr1: ARRAY [
STRING "Hello world!"
INTEGER 3
STRING "3"
FLOAT_32 3.29
FLOAT_64 249.2992
BOOLEAN true
COMPOUND {
str: STRING "one length compound"
}
BOOLEAN false
NULL null
]
bool1: BOOLEAN false
null1: NULL null
}
The xml key in a compound does matter!!
I made a file from it using this java code:
MBDFFile.writeMBDFToFile(
"/Users/<anonymous>/Documents/Java/MBDF/resources/file.mbdf",
b.makeMBDF(false)
);
Here, the variable b is an MBDFBinary object containing all the data given above. The makeMBDF function generates the ISO 8859-1 encoded string and, if the given boolean is true, compresses the string using GZip. Then, when writing, an extra information character is added at the beginning of the file, describing how to read it back.
Then, after writing the file, I read it back into Java and parse it:
MBDF mbdf = MBDFFile.readMBDFFromFile("/Users/<anonymous>/Documents/Java/MBDF/resources/file.mbdf");
System.out.println(mbdf.getBinaryObject().parse());
This prints exactly the information mentioned above.
Then I try to use compression:
MBDFFile.writeMBDFToFile(
"/Users/<anonymous>/Documents/Java/MBDF/resources/file.mbdf",
b.makeMBDF(true)
);
I do exactly the same to read it back as I did with the uncompressed file, which should work. It prints this information:
COMPOUND {
float1: FLOAT_32 3.3
bool2: BOOLEAN true
float2: FLOAT_64 3.3
int1: INTEGER 3
compound1: COMPOUND {
xUT: STRING 'two length compound'
int: INTEGER 23
}
string1: STRING 'Hello world!'
string2: STRING '3'
arr1: ARRAY [
STRING 'Hello world!'
INTEGER 3
STRING '3'
FLOAT_32 3.29
FLOAT_64 249.2992
BOOLEAN true
COMPOUND {
str: STRING 'one length compound'
}
BOOLEAN false
NULL null
]
bool1: BOOLEAN false
null1: NULL null
}
Comparing it to the initial information, the name xml changed into xUT for some reason...
After some research I found small differences in the binary data before and after compression; patterns such as 110011 change into 101010.
When I make the name xml longer, like xmldm, it is parsed back as xmldm just fine, for some reason.
So far I have only seen the problem occur with names of three characters.
Directly compressing and decompressing the generated string (without saving it to a file and reading that) does work, so maybe the bug is caused by the file encoding.
As far as I know, the string output is in ISO 8859-1 format, but I couldn't get the file encoding right. When a file is read, it is read as it has to be read, and all the characters are read as ISO 8859-1 characters.
I have some ideas about what could be causing this, but I don't know how to test them:
The GZip output has a different encoding than the uncompressed output, causing small differences when it is stored as a file.
The file is stored in UTF-8 format, ignoring the request for ISO 8859-1 encoding (I don't know how to explain it better :)).
There is a small bug in the Java GZip libraries.
But which one is true? And if none of them is right, what is the real reason for this bug? I can't figure it out.
The MBDFFile class, reading and storing the files:
/* MBDFFile.java */
package com.redgalaxy.mbdf;
import java.io.*;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
public class MBDFFile {
public static MBDF readMBDFFromFile(String filename) throws IOException {
// FileInputStream is = new FileInputStream(filename);
// InputStreamReader isr = new InputStreamReader(is, "ISO-8859-1");
// BufferedReader br = new BufferedReader(isr);
//
// StringBuilder builder = new StringBuilder();
//
// String currentLine;
//
// while ((currentLine = br.readLine()) != null) {
// builder.append(currentLine);
// builder.append("\n");
// }
//
// builder.deleteCharAt(builder.length() - 1);
//
//
// br.close();
Path path = Paths.get(filename);
byte[] data = Files.readAllBytes(path);
return new MBDF(new String(data, "ISO-8859-1"));
}
private static void writeToFile(String filename, byte[] txt) throws IOException {
// BufferedWriter writer = new BufferedWriter(new FileWriter(filename));
//// FileWriter writer = new FileWriter(filename);
// writer.write(txt.getBytes("ISO-8859-1"));
// writer.close();
// PrintWriter pw = new PrintWriter(filename, "ISO-8859-1");
FileOutputStream stream = new FileOutputStream(filename);
stream.write(txt);
stream.close();
}
public static void writeMBDFToFile(String filename, MBDF info) throws IOException {
writeToFile(filename, info.pack().getBytes("ISO-8859-1"));
}
}
The pack function generates the final string for the file, in ISO 8859-1 format.
For all the other code, see my MBDF Github repository.
I commented out the code I tried before, to show what I attempted.
My workspace:
- Macbook Air '11 (High Sierra)
- IntellIJ Community 2017.3
- JDK 1.8
I hope this is enough information; it's really the only way to make clear what I'm doing and what exactly isn't working.
Edit: MBDF.java
/* MBDF.java */
package com.redgalaxy.mbdf;
import java.io.IOException;
import java.io.UnsupportedEncodingException;
public class MBDF {
private String data;
private InfoTag tag;
public MBDF(String data) {
this.tag = new InfoTag((byte) data.charAt(0));
this.data = data.substring(1);
}
public MBDF(String data, InfoTag tag) {
this.tag = tag;
this.data = data;
}
public MBDFBinary getBinaryObject() throws IOException {
String uncompressed = data;
if (tag.isCompressed) {
uncompressed = GZipUtils.decompress(data);
}
Binary binary = getBinaryFrom8Bit(uncompressed);
return new MBDFBinary(binary.subBit(0, binary.getLen() - tag.trailing));
}
public static Binary getBinaryFrom8Bit(String s8bit) {
try {
byte[] bytes = s8bit.getBytes("ISO-8859-1");
return new Binary(bytes, bytes.length * 8);
} catch( UnsupportedEncodingException ignored ) {
// This is not gonna happen because encoding 'ISO-8859-1' is always supported.
return new Binary(new byte[0], 0);
}
}
public static String get8BitFromBinary(Binary binary) {
try {
return new String(binary.getByteArray(), "ISO-8859-1");
} catch( UnsupportedEncodingException ignored ) {
// This is not gonna happen because encoding 'ISO-8859-1' is always supported.
return "";
}
}
/*
* Adds leading zeroes to the binary string, so that the final amount of bits is 16
*/
private static String addLeadingZeroes(String bin, boolean is16) {
int len = bin.length();
long amount = (long) (is16 ? 16 : 8) - len;
// Create zeroes and append binary string
StringBuilder zeroes = new StringBuilder();
for( int i = 0; i < amount; i ++ ) {
zeroes.append(0);
}
zeroes.append(bin);
return zeroes.toString();
}
public String pack(){
return tag.getFilePrefixChar() + data;
}
public String getData() {
return data;
}
public InfoTag getTag() {
return tag;
}
}
This class contains the pack() method. data is already compressed here (if it should be).
For the other classes, please see the GitHub repository; I don't want to make my question too long.
Solved it by myself!
The problem turned out to be the reading and writing system. When I exported a file, I made a string using the ISO-8859-1 table to turn bytes into characters, and then wrote that string to a text file, which is UTF-8. The big problem was that I used FileWriter instances to write it, and those are meant for text files.
Reading used the inverse system: the complete file was read into memory as a string (memory consuming!!) and was then decoded.
I hadn't realized that a file is just binary data, and that text is only a particular encoding of that data; ISO-8859-1 and UTF-8 are two such encodings. I had problems with UTF-8 because it split some characters into two bytes, which I couldn't handle...
My solution was to use streams. Java has FileInputStream and FileOutputStream, which can be used for reading and writing binary files. I hadn't used streams because I thought there was no big difference ("files are text, so what's the problem?"), but there is. I implemented this (by writing a new, similar library) and I'm now able to pass any input stream to the decoder and any output stream to the encoder. To write uncompressed files, you pass a FileOutputStream. GZipped files can use a GZIPOutputStream wrapped around a FileOutputStream. If someone wants the binary data in memory, a ByteArrayOutputStream can be used. The same rules apply to reading, where the InputStream variants of those streams should be used.
No UTF-8 or ISO-8859-1 problems anymore, and it seemed to work, even with GZip!
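A minimal sketch of that idea (not the actual MBDF code; the method names writeMbdf/readMbdf are just illustrative): write and read the raw bytes through streams, optionally wrapped in GZIP streams, without ever converting them to a String.
```java
import java.io.*;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class BinaryIo {

    // write raw bytes; wrap the file stream in GZIP only when compression is wanted
    static void writeMbdf(File file, byte[] data, boolean compress) throws IOException {
        try (OutputStream out = compress
                ? new GZIPOutputStream(new FileOutputStream(file))
                : new FileOutputStream(file)) {
            out.write(data);
        }
    }

    // read the raw bytes back; no String/charset conversion is involved at any point
    static byte[] readMbdf(File file, boolean compressed) throws IOException {
        try (InputStream in = compressed
                ? new GZIPInputStream(new FileInputStream(file))
                : new FileInputStream(file);
             ByteArrayOutputStream buffer = new ByteArrayOutputStream()) {
            byte[] chunk = new byte[8192];
            int n;
            while ((n = in.read(chunk)) != -1) {
                buffer.write(chunk, 0, n);
            }
            return buffer.toByteArray();
        }
    }
}
```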
I'm a complete beginner in the world of cryptography. I have two files: one is a digital signature (p7s, signedData) and the other is the file whose signature I should verify. The question is: how can I do that using Java? At first I thought I could do it like this:
String rawString = ASN1ObjectIdentifier.fromByteArray(bytesArray).toString();
String rawStringForSurname = rawString.substring(rawString.indexOf("2.5.4.4,") + 9, rawString.length());
String signSurname = rawStringForSurname.substring(0, rawStringForSurname.indexOf("]"));
String rawStringForGivenName = rawString.substring(rawString.indexOf("2.5.4.42,") + 10, rawString.length());
String signGivenName = rawStringForGivenName.substring(0, rawStringForGivenName.indexOf("]"));
Which is awful, obviously. My input was originally intended to be just the one p7s file, which is decoded as ASN.1 so that the surname and given name can be compared with the author's data (a string from outside). Surprisingly, it turned out that I should also have the file that the signature covers. I know there is some hashing logic involved (checking that the file is intact and that the signature relates EXACTLY to that file). The question is: how can I retrieve this data from the file and the signature? And what exactly should I compare in order to accept or reject it? The library I use is Bouncy Castle.
I use the Demoiselle Framework to solve this in my project, especially its signer part. Maybe this piece of code can help you too. For example:
```java
package stackoverflow.my.pack;
import java.nio.file.Path;
import java.security.PrivateKey;
import java.security.cert.X509Certificate;
import java.util.Arrays;
import org.bouncycastle.asn1.x509.Certificate;
import org.demoiselle.signer.policy.engine.factory.PolicyFactory.Policies;
import org.demoiselle.signer.policy.impl.cades.factory.PKCS7Factory;
import org.demoiselle.signer.policy.impl.cades.pkcs7.PKCS7Signer;
public class MySigner {
private static PKCS7Signer signerLoader() {
PKCS7Signer signer = PKCS7Factory.getInstance().factoryDefault();
PrivateKey pk = MyReader.getPrivateKey();// Create some method to get your PK
signer.setPrivateKey(pk);
X509Certificate certificate = MyReader.getMyPublicKey();// Create some method to get your Pub
signer.setCertificates((Certificate[]) Arrays.asList(certificate).toArray());
if (is2048(pk)) {
signer.setSignaturePolicy(Policies.AD_RB_CADES_2_2);
} else {
signer.setSignaturePolicy(Policies.AD_RB_CADES_1_1);
}
return signer;
}
public static byte[] myMethodToSign(byte[] fileToBeSigned) throws MySignerException {
PKCS7Signer signer = signerLoader();
return signer.doHashSign(fileToBeSigned);
}
}
```
Demoiselle uses Bouncy Castle too; you can look at their sign method for guidance.
You have to load your private key, add Bouncy Castle as your provider, get your signature-policy information, validate the certificate chain, get a data generator, and build your attribute table.
That's the way Demoiselle does it.
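For comparison, here is a rough sketch of those steps using plain Bouncy Castle CMS classes (from the bcpkix module). This is not Demoiselle's exact code, it omits the signature-policy attributes, and privateKey, certificate and fileBytes are assumed to be loaded elsewhere:
```java
import java.security.Security;
import java.util.Collections;
import org.bouncycastle.cert.jcajce.JcaCertStore;
import org.bouncycastle.cms.CMSProcessableByteArray;
import org.bouncycastle.cms.CMSSignedData;
import org.bouncycastle.cms.CMSSignedDataGenerator;
import org.bouncycastle.cms.jcajce.JcaSignerInfoGeneratorBuilder;
import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.bouncycastle.operator.ContentSigner;
import org.bouncycastle.operator.jcajce.JcaContentSignerBuilder;
import org.bouncycastle.operator.jcajce.JcaDigestCalculatorProviderBuilder;

Security.addProvider(new BouncyCastleProvider());

// build a signer for the private key
ContentSigner contentSigner =
        new JcaContentSignerBuilder("SHA256withRSA").setProvider("BC").build(privateKey);

// set up the CMS generator with the signer and its certificate
CMSSignedDataGenerator generator = new CMSSignedDataGenerator();
generator.addSignerInfoGenerator(
        new JcaSignerInfoGeneratorBuilder(
                new JcaDigestCalculatorProviderBuilder().setProvider("BC").build())
            .build(contentSigner, certificate));
generator.addCertificates(new JcaCertStore(Collections.singletonList(certificate)));

// 'false' produces a detached signature, i.e. a p7s that sits next to the original file
CMSSignedData signedData = generator.generate(new CMSProcessableByteArray(fileBytes), false);
byte[] p7s = signedData.getEncoded();
```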
If you want to see an example of using their Signer:
https://github.com/demoiselle/signer/blob/master/signer-examples/src/main/java/org/demoiselle/signer/signer/examples/Signer.java
I am able to serialize an object into a file and then restore it, as shown in the next code snippet. I would like to serialize the object into a string and store it in a database instead. Can anyone help me?
LinkedList<Diff_match_patch.Patch> patches = // whatever...
FileOutputStream fileStream = new FileOutputStream("foo.ser");
ObjectOutputStream os = new ObjectOutputStream(fileStream);
os.writeObject(patches);
os.close();
FileInputStream fileInputStream = new FileInputStream("foo.ser");
ObjectInputStream oInputStream = new ObjectInputStream(fileInputStream);
Object one = oInputStream.readObject();
LinkedList<Diff_match_patch.Patch> patches3 = (LinkedList<Diff_match_patch.Patch>) one;
oInputStream.close();
Sergio:
You should use a BLOB. It is pretty straightforward with JDBC.
The problem with the second piece of code you posted is the encoding. You should additionally encode the bytes to make sure none of them gets corrupted.
If you still want to write it into a String, you can encode the bytes using java.util.Base64.
Even then, you should use a CLOB as the data type, because you don't know how long the serialized data is going to be.
Here is a sample of how to use it.
import java.util.*;
import java.io.*;
/**
* Usage sample serializing SomeClass instance
*/
public class ToStringSample {
public static void main( String [] args ) throws IOException,
ClassNotFoundException {
String string = toString( new SomeClass() );
System.out.println(" Encoded serialized version " );
System.out.println( string );
SomeClass some = ( SomeClass ) fromString( string );
System.out.println( "\n\nReconstituted object");
System.out.println( some );
}
/** Read the object from Base64 string. */
private static Object fromString( String s ) throws IOException ,
ClassNotFoundException {
byte [] data = Base64.getDecoder().decode( s );
ObjectInputStream ois = new ObjectInputStream(
new ByteArrayInputStream( data ) );
Object o = ois.readObject();
ois.close();
return o;
}
/** Write the object to a Base64 string. */
private static String toString( Serializable o ) throws IOException {
ByteArrayOutputStream baos = new ByteArrayOutputStream();
ObjectOutputStream oos = new ObjectOutputStream( baos );
oos.writeObject( o );
oos.close();
return Base64.getEncoder().encodeToString(baos.toByteArray());
}
}
/** Test subject. A very simple class. */
class SomeClass implements Serializable {
private final static long serialVersionUID = 1; // fixed so the serialized form stays compatible across class versions
int i = Integer.MAX_VALUE;
String s = "ABCDEFGHIJKLMNOP";
Double d = new Double( -1.0 );
public String toString(){
return "SomeClass instance says: Don't worry, "
+ "I'm healthy. Look, my data is i = " + i
+ ", s = " + s + ", d = " + d;
}
}
Output:
C:\samples>javac *.java
C:\samples>java ToStringSample
Encoded serialized version
rO0ABXNyAAlTb21lQ2xhc3MAAAAAAAAAAQIAA0kAAWlMAAFkdAASTGphdmEvbGFuZy9Eb3VibGU7T
AABc3QAEkxqYXZhL2xhbmcvU3RyaW5nO3hwf////3NyABBqYXZhLmxhbmcuRG91YmxlgLPCSilr+w
QCAAFEAAV2YWx1ZXhyABBqYXZhLmxhbmcuTnVtYmVyhqyVHQuU4IsCAAB4cL/wAAAAAAAAdAAQQUJ
DREVGR0hJSktMTU5PUA==
Reconstituted object
SomeClass instance says: Don't worry, I'm healthy. Look, my data is i = 2147483647, s = ABCDEFGHIJKLMNOP, d = -1.0
NOTE: for Java 7 and earlier you can see the original answer here
How about writing the data to a ByteArrayOutputStream instead of a FileOutputStream?
Otherwise, you could serialize the object using XMLEncoder, persist the XML, then deserialize via XMLDecoder.
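A minimal sketch of the XMLEncoder route (note that XMLEncoder handles JavaBean-style properties, not arbitrary Serializable graphs; someBean is just a placeholder for your object):
```java
import java.beans.XMLDecoder;
import java.beans.XMLEncoder;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

ByteArrayOutputStream buffer = new ByteArrayOutputStream();
try (XMLEncoder encoder = new XMLEncoder(buffer)) {
    encoder.writeObject(someBean);   // someBean: any JavaBean-style object
}
String xml = new String(buffer.toByteArray(), StandardCharsets.UTF_8);

try (XMLDecoder decoder = new XMLDecoder(
        new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)))) {
    Object restored = decoder.readObject();
}
```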
Thanks for the great and quick replies. I will give some upvotes immediately to acknowledge your help. I have coded what I think is the best solution, based on your answers.
LinkedList<Patch> patches1 = diff.patch_make(text2, text1);
try {
ByteArrayOutputStream bos = new ByteArrayOutputStream();
ObjectOutputStream os = new ObjectOutputStream(bos);
os.writeObject(patches1);
String serialized_patches1 = bos.toString();
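// note: bos.toString() uses the platform default charset, and round-tripping raw
// serialized bytes through a String like this can corrupt them; Base64-encoding
// the byte[] (as in the answers above) avoids that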
os.close();
ByteArrayInputStream bis = new ByteArrayInputStream(serialized_patches1.getBytes());
ObjectInputStream oInputStream = new ObjectInputStream(bis);
LinkedList<Patch> restored_patches1 = (LinkedList<Patch>) oInputStream.readObject();
// patches1 equals restored_patches1
oInputStream.close();
} catch(Exception ex) {
ex.printStackTrace();
}
Note: I did not consider using JSON because it is less efficient.
Note: I will consider your advice about not storing serialized objects as strings in the database, but as byte[] instead.
A Java 8 approach for converting an Object to/from a String, inspired by the answer from OscarRyz. For encoding and decoding, java.util.Base64 is used.
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.Base64;
import java.util.Optional;
final class ObjectHelper {
private ObjectHelper() {}
static Optional<String> convertToString(final Serializable object) {
try (final ByteArrayOutputStream baos = new ByteArrayOutputStream();
ObjectOutputStream oos = new ObjectOutputStream(baos)) {
oos.writeObject(object);
return Optional.of(Base64.getEncoder().encodeToString(baos.toByteArray()));
} catch (final IOException e) {
e.printStackTrace();
return Optional.empty();
}
}
static <T extends Serializable> Optional<T> convertFrom(final String objectAsString) {
final byte[] data = Base64.getDecoder().decode(objectAsString);
try (final ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(data))) {
return Optional.of((T) ois.readObject());
} catch (final IOException | ClassNotFoundException e) {
e.printStackTrace();
return Optional.empty();
}
}
}
XStream provides a simple utility for serializing/deserializing to/from XML, and it's very quick. Storing XML CLOBs rather than binary BLOBS is going to be less fragile, not to mention more readable.
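A minimal sketch of the XStream route (assuming the XStream library is on the classpath; patches stands in for the object from the question, and newer XStream versions may additionally require configuring allowed types for deserialization security):
```java
import com.thoughtworks.xstream.XStream;

XStream xstream = new XStream();
String xml = xstream.toXML(patches);        // serialize to an XML String (store it in a CLOB)
Object restored = xstream.fromXML(xml);     // deserialize it back
```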
How about persisting the object as a blob?
If you're storing an object as binary data in the database, then you really should use a BLOB datatype. The database is able to store it more efficiently, and you don't have to worry about encodings and the like. JDBC provides methods for creating and retrieving blobs in terms of streams. Use Java 6 if you can, it made some additions to the JDBC API that make dealing with blobs a whole lot easier.
If you absolutely need to store the data as a String, I would recommend XStream for XML-based storage (much easier than XMLEncoder), but alternative object representations might be just as useful (e.g. JSON). Your approach depends on why you actually need to store the object in this way.
Take a look at the java.sql.PreparedStatement class, specifically the function
http://java.sun.com/javase/6/docs/api/java/sql/PreparedStatement.html#setBinaryStream(int,%20java.io.InputStream)
Then take a look at the java.sql.ResultSet class, specifically the function
http://java.sun.com/javase/6/docs/api/java/sql/ResultSet.html#getBinaryStream(int)
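Roughly, those two calls fit together like this. This is only a sketch: the table patch_store(id, data BLOB), the open connection, and the serialize() helper (an ObjectOutputStream to byte[], as shown elsewhere in this thread) are assumptions, not part of the original answer:
```java
byte[] bytes = serialize(patches);   // ObjectOutputStream -> byte[]

// store the serialized object in a BLOB column
try (PreparedStatement insert = connection.prepareStatement(
        "INSERT INTO patch_store (id, data) VALUES (?, ?)")) {
    insert.setInt(1, 1);
    insert.setBinaryStream(2, new ByteArrayInputStream(bytes), bytes.length);
    insert.executeUpdate();
}

// read it back and deserialize straight from the stream
try (PreparedStatement select = connection.prepareStatement(
        "SELECT data FROM patch_store WHERE id = ?")) {
    select.setInt(1, 1);
    try (ResultSet rs = select.executeQuery()) {
        if (rs.next()) {
            try (ObjectInputStream in = new ObjectInputStream(rs.getBinaryStream(1))) {
                Object restored = in.readObject();
            }
        }
    }
}
```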
Keep in mind that if you serialize an object into a database and then change that object's class in a new version of your code, deserialization can easily fail because the object's signature changed. I once made this mistake by storing a custom Preferences object in serialized form and then changing the Preferences definition. Suddenly I couldn't read any of the previously serialized information.
You might be better off writing clunky per-property columns in a table and composing and decomposing the object that way instead, to avoid this issue with object versions and deserialization. Or write the properties into a map of some sort, like a java.util.Properties object, and then serialize the Properties object, which is extremely unlikely to change.
The serialized stream is just a sequence of bytes (octets). So the question is how to convert a sequence of bytes to a String, and back again. Furthermore, it needs to use a limited set of character codes if it is going to be stored in a database.
The obvious solution to the problem is to change the field to a binary LOB. If you want to stick with a character LOB, then you'll need to encode the bytes in some scheme such as base64, hex, or uuencode.
You can use the built-in classes sun.misc.BASE64Encoder and sun.misc.BASE64Decoder to convert the binary serialized data to a string and back. You do not need additional classes because they are built in.
Simple solution that worked for me:
public static byte[] serialize(Object obj) throws IOException {
ByteArrayOutputStream out = new ByteArrayOutputStream();
ObjectOutputStream os = new ObjectOutputStream(out);
os.writeObject(obj);
return out.toByteArray();
}
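A matching counterpart for reading the object back (not part of the original answer, added for completeness):
```java
public static Object deserialize(byte[] data) throws IOException, ClassNotFoundException {
    try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data))) {
        return in.readObject();
    }
}
```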
Today the most obvious approach is to save the object(s) to JSON.
JSON is readable
JSON is more readable and easier to work with than XML.
A lot of NoSQL databases allow storing JSON directly.
Your client already communicates with the server using JSON. (If it doesn't, it is very likely a mistake.)
Example using Gson.
Gson gson = new Gson();
Person[] persons = getArrayOfPersons();
String json = gson.toJson(persons);
System.out.println(json);
//output: [{"name":"Tom","age":11},{"name":"Jack","age":12}]
Person[] personsFromJson = gson.fromJson(json, Person[].class);
//...
class Person {
public String name;
public int age;
}
Gson also allows converting a List directly; examples can easily be googled (see the sketch below). I prefer to convert lists to arrays first.
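If you do want to convert a List directly, a small sketch (Person is the class from the example above; TypeToken comes from Gson):
```java
import com.google.gson.reflect.TypeToken;
import java.lang.reflect.Type;
import java.util.List;

// a TypeToken preserves the generic element type through type erasure
Type listType = new TypeToken<List<Person>>() {}.getType();
List<Person> people = gson.fromJson(json, listType);
```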
You can use uuencoding.