Convert PDF to Base64 and store data to BLOB of Database - java

I want to store binary data (e.g. a PDF) in a BLOB of my Oracle database.
At first I put the PDF into a FileInputStream and created a byte array. Here is the code for that:
public static byte[] createByteArray(File pCurrentFolder, String pNameOfBinaryFile)
{
    String pathToBinaryData = pCurrentFolder.getAbsolutePath() + "/" + pNameOfBinaryFile;
    File file = new File(pathToBinaryData);
    if (!file.exists())
    {
        System.out.println(pNameOfBinaryFile + " could not be found in folder " + pCurrentFolder.getName());
        return null;
    }
    byte[] fileContent = new byte[(int) file.length()];
    FileInputStream fin = null;
    try {
        fin = new FileInputStream(file);
        // Note: a single read() is not guaranteed to fill the whole array for large files
        fin.read(fileContent);
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        if (fin != null) {
            try {
                fin.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
    return fileContent;
}
I sent this byte array via MyBatis to the database and it worked: the PDF was stored in my BLOB and I could also read the PDF back from my database.
But now I face the following problem:
I have a JDBC connector for my search engine (FAST ESP... but that doesn't matter) which connects to a certain database and stores all the content in an XML file. Inside this XML file is an element called "data" which contains the binary data inside its CDATA section.
When I want to parse this xml, Java tells me:
The content of elements must consist of well-formed character data or markup.
With some PDFs it works, but with others it does not. So I think the problem is that I have stored them in the database in the wrong way.
For further information I would refer to another question I asked before which is similar to this one:
Java: skip binary data in xml file while parsing
Someone there told me that I should encode my PDF (or any binary file) with Base64. That would mean I do not just put my PDF into a FileInputStream, store the byte[] and put this byte[] into the BLOB of the database.
What do I have to do to store the PDF in the correct way in my database, so that afterwards I can correctly parse the XML file the JDBC connector creates?

You can use the JAXB DatatypeConverter class to easily convert your data to base64 without any external dependencies:
byte[] arr = YOUR_BINARY_ARRAY;
String result = javax.xml.bind.DatatypeConverter.printBase64Binary(arr);
You can simply add this code to the end of your method and change its return type to a String.
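For example, a minimal sketch of what that change could look like, reusing the question's createByteArray (the wrapper method name is made up; javax.xml.bind ships with Java 6-10 and needs the separate jaxb-api dependency on newer JDKs):
// Sketch of the suggested change: same inputs as the question's method, but returning Base64 text.
public static String createBase64String(File pCurrentFolder, String pNameOfBinaryFile)
{
    byte[] fileContent = createByteArray(pCurrentFolder, pNameOfBinaryFile);
    if (fileContent == null)
    {
        return null;
    }
    return javax.xml.bind.DatatypeConverter.printBase64Binary(fileContent);
}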

You can try to first convert the bytes to Base64 using Apache Commons, as in this example:
import org.apache.commons.codec.binary.Base64;
import java.util.Arrays;

public class Base64Encode {
    public static void main(String[] args) {
        String hello = "Hello World";
        byte[] encoded = Base64.encodeBase64(hello.getBytes());
        System.out.println(Arrays.toString(encoded));
        String encodedString = new String(encoded);
        System.out.println(hello + " = " + encodedString);
    }
}

Related

How to get SVG in database in Java

I am having one problem. I save SVG images in the database as binary. Now I want to download them without converting to Base64; is there any way? Thank you.
Basically, that would mean getting the BLOB object from the database.
I would follow this approach to show it directly in the browser:
@RestController
public class ImageController {

    @GetMapping(value = "/img-test", produces = "image/svg+xml")
    public byte[] getImg() throws IOException
    {
        // retrieve your image from the DB instead of from disk
        File imgFile = new File("C:\\Users\\USR\\Desktop\\img.svg");
        InputStream inp = new DataInputStream(new FileInputStream(imgFile));
        return inp.readAllBytes(); // readAllBytes() is Java 9+
    }
}
With this approach, you do not change anything about the BLOB image. You take it and return it as is: an array of bytes. And you can directly show it in a browser or embed it somewhere in your HTML file.
The main thing here is the MIME type: image/svg+xml
If you are using an older version of Java, then check this question for the conversion of the InputStream object to a byte array.
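For reference, a minimal pre-Java 9 sketch of that conversion (plain java.io, no extra libraries; the class name StreamUtil is made up):
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public final class StreamUtil {

    // Sketch: pre-Java 9 replacement for InputStream.readAllBytes()
    public static byte[] toByteArray(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[8192];
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
        }
        return out.toByteArray();
    }
}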
And with this approach you can download the file:
#GetMapping("download-img")
public ResponseEntity downloadImg() throws IOException
{
// Get the file from the DB...
File imgFile = new File("C:\\Users\\USR\\Desktop\\img.svg");
InputStream inp = new DataInputStream(new FileInputStream(imgFile));
//Dynamically change the File Name here
return ResponseEntity.ok()
.header(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename=\"img.svg\"")
.body(inp.readAllBytes());
}

Read a file using Java from an S3 bucket and HTTP PUT file to presigned AWS S3 URL of another bucket in a way that simulates an actual file upload

New to Java and HTTP requests.
Why this question is not a duplicate: I'm not using AWS SDK to generate any presigned URL. I get it from an external API.
Here is what I'm trying to accomplish:
Step 1: Read the source S3 bucket for a file (for now .xlsx)
Step 2: Parse this file by converting it to an InputStreamReader (I need help here)
Step 3: Do a HTTP PUT of this file by transferring the contents of the InputStreamReader to an OutputStreamWriter, on a pre-signed S3 URL that I already have obtained from an external team. The file must sit in the destination S3 bucket, in the exact way a file is uploaded manually by dragging and dropping. (Also need help here)
Here is what I've tried:
Step 1: Read the S3 bucket for the file
public class LambdaMain implements RequestHandler<S3Event, String> {

    @Override
    public String handleRequest(final S3Event event, final Context context) {
        System.out.println("Create object was called on the S3 bucket");
        S3EventNotification.S3EventNotificationRecord record = event.getRecords().get(0);
        String srcBucket = record.getS3().getBucket().getName();
        String srcKey = record.getS3().getObject().getUrlDecodedKey();
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withCredentials(DefaultAWSCredentialsProviderChain.getInstance())
                .build();
        S3Object s3Object = s3Client.getObject(new GetObjectRequest(srcBucket, srcKey));
        String presignedS3Url = // Assume that I have this by making an external API call
        InputStreamReader inputStreamReader = parseFileFromS3(s3Object);           // Step 2
        int responseCode = putContentIntoS3URL(inputStreamReader, presignedS3Url); // Step 3
    }
Step 2: Parse the file into an InputStreamReader to copy it to an OutputStreamWriter:
private InputStreamReader parseFileFromS3(S3Object s3Object) {
    return new InputStreamReader(s3Object.getObjectContent(), StandardCharsets.UTF_8);
}
Step 3: Make an HTTP PUT call by copying the contents from the InputStreamReader to an OutputStreamWriter:
private int putContentIntoS3URL(InputStreamReader inputStreamReader, String presignedS3Url) {
    URL url = null;
    try {
        url = new URL(presignedS3Url);
    } catch (MalformedURLException e) {
        e.printStackTrace();
    }
    HttpURLConnection httpCon = null;
    try {
        assert url != null;
        httpCon = (HttpURLConnection) url.openConnection();
    } catch (IOException e) {
        e.printStackTrace();
    }
    httpCon.setDoOutput(true);
    try {
        httpCon.setRequestMethod("PUT");
    } catch (ProtocolException e) {
        e.printStackTrace();
    }
    OutputStreamWriter outputStreamWriter = null;
    try {
        outputStreamWriter = new OutputStreamWriter(httpCon.getOutputStream());
    } catch (IOException e) {
        e.printStackTrace();
    }
    try {
        IOUtils.copy(inputStreamReader, outputStreamWriter);
    } catch (IOException e) {
        e.printStackTrace();
    }
    try {
        outputStreamWriter.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
    try {
        httpCon.getInputStream();
    } catch (IOException e) {
        e.printStackTrace();
    }
    int responseCode = 0;
    try {
        responseCode = httpCon.getResponseCode();
    } catch (IOException e) {
        e.printStackTrace();
    }
    return responseCode;
}
The issue with the above approach is that when I read an .xlsx file via an S3 insert trigger and PUT it to the URL, the uploaded file comes back as gibberish when I download it.
When I try reading in a .png file and PUT it to the URL, the uploaded file downloads as a text file with some gibberish (I did see the word PNG in it, though).
It feels like I'm making mistakes with:
Incorrectly creating an OutputStreamWriter, since I don't understand how to send a file via an HTTP request
Assuming that every file type can be handled in a generic way.
Not setting the content-type in the HTTP request
Expecting S3 to magically understand my file type after the PUT operation
I would like to know if my above 4 assumptions are correct or incorrect.
The intention is that I do the PUT on the file data correctly, so that it sits in the S3 bucket with the correct file type/extension. I hope my effort is worthy of some help. I've done a lot of searching into HTTP PUT and file I/O, but I'm unable to link them together for my use case, since I perform file I/O followed by an HTTP PUT.
UPDATE 1:
I've added the setRequestProperty("Content-Type", "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"), but the file doesn't sit in the S3 bucket with the file extension. It simply sits there as an object.
UPDATE 2:
I think this also has something to do with setContentDisposition() header, although I'm not sure how I go about setting these headers for Excel files.
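For reference, a hedged sketch of what setting those headers could look like inside putContentIntoS3URL, right after the request method is set (error handling omitted, the file name is only a placeholder, and whether S3 actually keeps these headers depends on what the presigned URL was signed with, which is exactly the UPDATE 3 question below):
httpCon.setRequestMethod("PUT");
// MIME type of an .xlsx file
httpCon.setRequestProperty("Content-Type",
        "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet");
// "report.xlsx" is only a placeholder; S3 may ignore this unless it was part of the signature
httpCon.setRequestProperty("Content-Disposition", "attachment; filename=\"report.xlsx\"");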
UPDATE 3:
This may simply have to do with how the Presigned S3 URL itself is vended out to us. As mentioned in the question, I said that we get the Presigned S3 URL from some other team. The question itself has multiple parts that need answering.
Does the default Presigned S3 URL ALLOW clients to set the content-type and content-disposition in the HTTP header?: I've set up another separate question here since it's quite unclear: Can a client set file name and extension programmatically when he PUTs file content to a presigned S3 URL that the service vends out?
If the answer to above question is TRUE, then and only then must we go into how to set the file contents and write it to the OutputStream
You are using InputStreamReader and OutputStreamWriter, which are both bridges between a byte stream and a character stream. However, you are using these with byte data, which means you first convert your bytes to characters, and then back to bytes. Since your data is not character data, this conversion might explain why you get gibberish as a result.
I'd start trying to get rid of the reader and writer, instead directly using the InputStream (which you already got from s3Object.getObjectContent()), and the OutputStream (which you got from httpCon.getOutputStream()). IOUtils.copy should also support this.
Also as a side note, when you construct the InputStreamReader you set StandardCharsets.UTF_8 as the charset to use, but when you construct the OutputStreamWriter you don't set the charset. Should the default charset not be UTF-8, this conversion would probably also result in gibberish.
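A minimal sketch of that suggestion, as a drop-in replacement for the question's putContentIntoS3URL (assuming IOUtils is org.apache.commons.io.IOUtils; error handling trimmed for brevity, and java.io.OutputStream would need to be imported):
// Sketch: copy the S3 object's bytes straight to the connection, no Reader/Writer in between.
private int putContentIntoS3URL(InputStream inputStream, String presignedS3Url) throws IOException {
    URL url = new URL(presignedS3Url);
    HttpURLConnection httpCon = (HttpURLConnection) url.openConnection();
    httpCon.setDoOutput(true);
    httpCon.setRequestMethod("PUT");
    try (OutputStream out = httpCon.getOutputStream()) {
        IOUtils.copy(inputStream, out); // byte-for-byte copy of the file content
    }
    return httpCon.getResponseCode();
}
The call site would then pass s3Object.getObjectContent() directly instead of wrapping it in an InputStreamReader.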

How to write a string which starts with - into a CSV file?

I am trying to write data to a CSV file.
A string value which starts with - gets converted to #NAME? automatically when I open the CSV file after writing. For example, if I write test it displays correctly, but when I write -test the value shows up as #NAME? when I open the CSV file. It is not a code issue; the CSV file's value that starts with - is automatically changed to the error #NAME? on opening. How can I correct this programmatically? Below is the code:
public class FileWriterTest {

    public static void main(String[] args) {
        BufferedWriter bufferedWriter = null;
        File file = new File("test.csv");
        try {
            bufferedWriter = new BufferedWriter(new FileWriter(file));
            List<String> records = getRecords();
            for (String record : records) {
                bufferedWriter.write(record);
                bufferedWriter.newLine();
            }
            bufferedWriter.flush();
            System.out.println("Completed writing data to a file.");
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try {
                if (bufferedWriter != null)
                    bufferedWriter.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    public static List<String> getRecords() {
        List<String> al = new ArrayList<String>();
        String s1 = "test";
        String s2 = "-test";
        al.add(s1);
        al.add(s2);
        return al;
    }
}
Could you please assist?
It's a problem with Excel. When you open a CSV file in Excel, it tries to determine the cell type automatically, which often fails. The CSV file is alright; the editor is not ;)
You can either right-click on the field, select Format Cells and set its format to Text (and you might need to remove the automatically inserted '=' sign), or you can open the CSV file by going to Data - From Text/CSV and selecting the proper column types in the wizard.
In the formal CSV standard, you can do this by using quotes (") around the field value. They're a text delimiter (as opposed to other kinds of values, like numeric ones).
It sounds like you're using Excel. You may need to enable " as a text delimiter/indicator.
Update: If you double-click the .csv to open it in Excel, even this doesn't work. You have to open a workbook and then import the CSV data into it. (Pathetic, really...)
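If the file comes from the question's code, a small sketch of that quoting on the Java side (a drop-in change to getRecords; note the update above, Excel may still mangle it when the .csv is opened by double-click):
// Sketch: wrap each value in double quotes so it is written as a quoted CSV text field.
public static List<String> getRecords() {
    List<String> al = new ArrayList<String>();
    al.add("\"test\"");
    al.add("\"-test\""); // quoted field: "-test"
    return al;
}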
I got a relatively old version of Excel (2007), and the following works perfectly:
Put the text between double quotes and precede it with an equals sign.
I.e., -test becomes ="-test".
Your file will therefore look like this:
test1,test2,test3
test4,="-test5",test6
UPDATE
Works in Excel 2010 as well.
As Veselin Davidov mentioned, this will break the CSV standard, but I don't know whether that's a problem.
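On the Java side, the same trick could look like this small sketch (the helper name is made up; it would be called from the question's getRecords):
// Sketch: emit -test as ="-test" so Excel treats it as literal text.
private static String asExcelText(String value) {
    return "=\"" + value + "\"";
}
// e.g. al.add(asExcelText("-test"));  ->  writes  ="-test"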

Android Studio can't find my JSON file

I created a raw Android resource directory in res and added cars.json to it. It shows up in my package explorer. When I try to call
File file = new File("raw/cars.json")
I get a FileNotFoundException. I've tried using simply cars.json and even the entire path to the file but I get the same results.
Any suggestions?
To get a resource from the raw directory, call getResources().openRawResource() with the resource id (for cars.json that would be R.raw.cars).
This will give you an InputStream.
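A small sketch of that, assuming the file sits at res/raw/cars.json (so the generated id is R.raw.cars) and that a Context, such as an Activity, is available:
// Sketch: read res/raw/cars.json into a String via openRawResource().
private String readRawJson(Context context) throws IOException {
    InputStream in = context.getResources().openRawResource(R.raw.cars);
    try {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[4096];
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
        }
        return new String(out.toByteArray(), "UTF-8");
    } finally {
        in.close();
    }
}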
You can use this function directly:
private String getJsonStringFromRaw(String filename)
{
    String jsonString = null;
    try {
        // Note: getAssets() reads from the assets/ folder, not from res/raw
        InputStream inputStream = getAssets().open(filename);
        int size = inputStream.available();
        byte[] buffer = new byte[size];
        inputStream.read(buffer);
        inputStream.close();
        jsonString = new String(buffer, "UTF-8");
    }
    catch (IOException e)
    {
        e.printStackTrace();
        return null;
    }
    return jsonString;
}
You can then access the JSON data by iterating through the JSON object below:
JSONObject obj = new JSONObject(getJsonStringFromRaw("cars.json"));
Retrieving a resource from the raw directory can be achieved with openRawResource(), a method on the Resources object returned by getResources(); it takes a resource id (your JSON file) and returns an InputStream.
InputStream myInput=getResources().openRawResource(R.raw.your_json);
If you need more info, here is the Google documentation.

byte[] InputStream converted to String

This is my case: I'm using a library for reading files from a repository (I can't modify that library). The library has a method getContent that returns a String (it uses BasicResponseHandler to convert the response to a String), but the repository also contains binary files, and I need a byte[] to save them to a file. I tried using
content.getBytes("UTF-8") and it works with text files, but with other files like images I get a corrupted file.
BasicResponseHandler uses this to convert the input to String (charset is UTF-8):
Reader reader = new InputStreamReader(instream, charset);
CharArrayBuffer buffer = new CharArrayBuffer(i);
try {
    char[] tmp = new char[1024];
    int l;
    while ((l = reader.read(tmp)) != -1) {
        buffer.append(tmp, 0, l);
    }
} finally {
    reader.close();
}
return buffer.toString();
Does anyone know what I can do?
An image isn't a String and shouldn't be converted to one. Simply write the byte[] back out to a file, and you'll have the image stored in that file.
If you aren't able to edit the library code being used, I would suggest looking for a new library to use. Perhaps one that doesn't assume anything about the file content type.
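For the part that is in your hands once you do have real bytes, a minimal sketch of writing a byte[] to a file (method and file names are placeholders); the harder problem remains getting the original bytes instead of the already-converted String:
// Sketch: write raw bytes straight to disk, with no String conversion in between.
private static void saveBytes(byte[] data, File target) throws IOException {
    FileOutputStream fos = new FileOutputStream(target);
    try {
        fos.write(data);
    } finally {
        fos.close();
    }
}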
