I'm working on a Java app that uses Spark 2.3.1 to load data from Oracle to HDFS and vice versa.
I want to create a CSV file in HDFS and then load it into an Oracle (12.2) BLOB.
The code:
//create Dataset
Dataset<Row> dataset = SparkService.sql("select * from test_table");
String trgtFileWithPath = "/tmp/test_table.csv";
//save file in HDFS
dataset.write().mode("overwrite").format("csv").save(trgtFileWithPath);
//get file from HDFS
JavaSparkContext jsc = SparkContextUtil.getJavaSparkContext("appId");
JavaRDD<String> textFile = jsc.textFile(trgtFileWithPath);
//Call Oracle package, that inserts into table with BLOB field
File csvFile = new File("/tmp/ETLFramework/test_table1.csv");
BufferedInputStream bis = new BufferedInputStream(new FileInputStream(csvFile), 500);
Connection conn = tbl.getJdbcConnection(); //there is tbl var with java.sql.Connection
CallableStatement cstmt = conn.prepareCall(String.format("{call %s(?, ?, ?)}", "ORACLE_API_FOR_ETL_FRAMEWORK.INSERT_LOB"));
cstmt.setString(1, "FILE_TO_LOB");
cstmt.setString(2, "/tmp/test_table.csv");
cstmt.setClob(3, bis, (int) csvFile.length());
cstmt.execute();
if (!conn.getAutoCommit()) {
    conn.commit();
}
I'm new to Spark, so any ideas on how to convert the JavaRDD to a BufferedInputStream, or how to get rid of the mess above and load the Dataset into an Oracle BLOB in a saner way?
Thanks
Finally, after a couple of days of fighting with Oracle, Hadoop and Spark, I found a solution for my task:
try {
    String trgtFolderPath = "tmp/ETLFramework/csv/form_name";
    Configuration conf = new Configuration();
    String hdfsUri = "hdfs://" + /*nameNode*/ + ":" + /*hdfsPort*/;
    FileSystem fileSystem = FileSystem.get(URI.create(hdfsUri), conf);
    RemoteIterator<LocatedFileStatus> fileStatusListIterator = fileSystem.listFiles(new Path(trgtFolderPath), true);
    while (fileStatusListIterator.hasNext()) {
        LocatedFileStatus fileStatus = fileStatusListIterator.next();
        String fileName = fileStatus.getPath().getName();
        if (fileName.contains(".csv") && fileStatus.getLen() > 0) {
            log.info("fileName=" + fileName);
            log.info("fileStatus.getLen=" + fileStatus.getLen());
            BufferedInputStream bis = new BufferedInputStream(fileSystem.open(new Path(trgtFolderPath + "/" + fileName)), 500);
            ETLParams param = ETLParams.getParams();
            Connection conn = tbl.getJdbcConnection();
            String apiPackageInsertLOB = ETLService.replaceParams(tbl.getConnection().getFullSchema() + "." + tbl.getApiPackage().getDbTableApiPackageInsertLOB(), param.getParamsByName());
            log.info(String.format("Call %s(%s, %s, %s);", apiPackageInsertLOB, tbl.getFullTableName(), trgtFolderPath + "/" + fileName, "p_nInsertedRows"));
            CallableStatement cstmt = conn.prepareCall(String.format("{call %s(?, ?, ?, ?)}", apiPackageInsertLOB));
            cstmt.setString(1, tbl.getFullTableName());
            cstmt.setString(2, trgtFolderPath + "/" + fileName);
            cstmt.setBlob(3, bis, fileStatus.getLen());
            cstmt.registerOutParameter(4, Types.INTEGER);
            cstmt.execute();
            int rowsInsertedCount = cstmt.getInt(4); // read the OUT parameter registered at index 4
            log.info("Inserted " + rowsInsertedCount + " rows into table blob_file");
            cstmt.close();
        }
    }
    fileSystem.close();
}
catch (IOException | SQLException exc) {
    exc.printStackTrace();
}
Writing a 2 GB CSV from a Spark Dataset to HDFS and then reading that CSV from HDFS into an Oracle BLOB took about 5 minutes.
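One possible simplification (just a sketch, not something I benchmarked here, and only sensible if pushing the whole Dataset through one task is acceptable): coalesce the Dataset to a single partition before writing, so HDFS ends up with exactly one part file to locate and stream.
Dataset<Row> dataset = SparkService.sql("select * from test_table"); // same Dataset as in the question
dataset.coalesce(1)                        // one partition -> one part-xxxxx.csv file
       .write()
       .mode("overwrite")
       .option("header", "true")           // optional header row
       .format("csv")
       .save("/tmp/test_table.csv");
The trade-off is that coalesce(1) funnels all the data through a single task, so for very large extracts the multi-file loop above scales better.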
Related
I have a web app: the client side is JSP, the backend is Java, and the DB is a simple SQLite database.
The DB has a table called Reports that contains "files", and due to security requirements users are allowed to download each file from the DB only ONCE. I have been trying to find a way to do that.
Is there any way I can write JSP code that allows a user to download the requested file only once from the DB?
I don't know if it is useful, but this is the Java code that is used to download from the DB.
String sql = "SELECT file, filename FROM reports INNER JOIN download USING(tipid) WHERE reports.tipid = ?"+
"AND download.ts_" + ae_num+ " = 0;";
PreparedStatement stmt = c.prepareStatement(sql);
String tipNum = request.getParameter("tipid");
if (tipNum != null) {
stmt.setString(1, tipNum);
//stmt.setString(2, tipNum);
ResultSet res = stmt.executeQuery();
BufferedInputStream fileBlob = null;
String filename = "";
while (res.next()) {
fileBlob = new BufferedInputStream(res.getBinaryStream("file"), DEFAULT_BUFFER_SIZE);
filename = res.getString("filename");
}
if (fileBlob != null) {
System.out.println(filename);
response.setContentType("APPLICATION/OCTET-STREAM");
response.setHeader("Content-Disposition", "attachment; filename=\"" + filename + "\"");
BufferedOutputStream output = new BufferedOutputStream(response.getOutputStream(),
DEFAULT_BUFFER_SIZE);
byte[] buffer = new byte[DEFAULT_BUFFER_SIZE];
int length;
while ((length = fileBlob.read(buffer)) > 0) {
output.write(buffer, 0, length);
}
output.close();
fileBlob.close();
Date now = new Date();
sql = "UPDATE download SET ts_" + ae_num + " = " + now.getTime() + " WHERE tipid = ?;";
System.out.println(sql);
stmt = c.prepareStatement(sql);
stmt.setString(1, tipNum);
stmt.executeUpdate();
stmt.close();
c.commit();
c.close();
The current problem I'm having is that whenever a user tries to download the requested file, it is counted as a downloaded file regardless of whether the user chooses open/save or cancel, so even a cancel counts.
Any ideas? Would using cookies in JSP help to implement this? If so, can someone guide me? Or how else can I solve the download-count issue?
If you have control over the database, you could possibly have another table with user_id, file_id and download_status columns. That way you would always have a record of who downloaded the file.
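A minimal sketch of that idea (the user_downloads table and its columns are made up here, and userId is assumed to come from the authenticated session): only mark the row as downloaded after the response stream has been written completely, which at least avoids counting transfers that fail part-way through.
String markSql = "UPDATE user_downloads SET download_status = 1 "
               + "WHERE user_id = ? AND file_id = ? AND download_status = 0";
try (BufferedInputStream in = new BufferedInputStream(res.getBinaryStream("file"), DEFAULT_BUFFER_SIZE);
     BufferedOutputStream out = new BufferedOutputStream(response.getOutputStream(), DEFAULT_BUFFER_SIZE)) {
    byte[] buffer = new byte[DEFAULT_BUFFER_SIZE];
    int length;
    while ((length = in.read(buffer)) > 0) {
        out.write(buffer, 0, length);
    }
    out.flush();                                   // the whole BLOB reached the container without an exception
    try (PreparedStatement mark = c.prepareStatement(markSql)) {
        mark.setString(1, userId);                 // assumption: taken from the session
        mark.setString(2, tipNum);
        mark.executeUpdate();
        c.commit();
    }
}
Note that even this cannot tell whether the user cancelled the browser's save dialog after the bytes were sent; the server only knows whether the stream was written without error.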
I'm trying to save an image in a PostgreSQL database but I am unable to do so. I'm trying to call a function to which I need to pass the image as bytea.
The function to store the image is:
CREATE OR REPLACE FUNCTION products_update_image(product_id character varying, img bytea)
RETURNS void AS
'BEGIN UPDATE PRODUCTS SET IMAGE=img::bytea WHERE ID=product_id; END;'
LANGUAGE plpgsql VOLATILE
COST 100;
ALTER FUNCTION products_update_image(character varying, bytea)
OWNER TO postgres;
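For reference, if you do want to pass the image to that function directly over JDBC, a minimal sketch could look like the following (con, productId and the file path are placeholders, not taken from the question):
// Read the image into memory and bind it as the bytea parameter
byte[] imageBytes = java.nio.file.Files.readAllBytes(
        java.nio.file.Paths.get("C:\\images\\image1.jpg"));     // placeholder path
try (PreparedStatement ps = con.prepareStatement("SELECT products_update_image(?, ?)")) {
    ps.setString(1, productId);   // the character varying parameter
    ps.setBytes(2, imageBytes);   // the bytea parameter
    ps.execute();                 // the function returns void, so just execute it
}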
The answer to this question is very simple. I recently worked on this and faced the same issue you are facing; you can use the code below.
// Save an image from a server to a physical location
String destinationFile = "C:\\Program Files (x86)\\openbravopos-2.30.2\\image1.jpg";
// This will call a function which saves an image at your given location
saveImage(image, destinationFile);
// You don't need to call a procedure here, just pass this query
PreparedStatement pstmt = con.prepareStatement("UPDATE PRODUCTS SET IMAGE = ? WHERE ID = ?");
// Location of the image with its name
File file = new File("Location\\image1.jpg");
FileInputStream in = new FileInputStream(file);
try
{
    pstmt.setBinaryStream(1, in, (int) file.length());
    pstmt.setString(2, id);
    pstmt.executeUpdate();
}
catch (Exception ee)
{
    System.out.println("Exception is:- " + ee);
}
// Function that saves the image to a local location
public static void saveImage(String imageUrl, String destinationFile) throws IOException
{
    URL url = new URL(imageUrl);
    InputStream is = url.openStream();
    OutputStream os = new FileOutputStream(destinationFile);
    byte[] b = new byte[2048];
    int length;
    while ((length = is.read(b)) != -1)
    {
        os.write(b, 0, length);
    }
    is.close();
    os.close();
}
I hope this works for you, as it worked for me.
I am working on a Java project in which I convert an ODS file to HTML and then open the converted file in a browser.
Below is my code:
private String ConvertOdtAndOdsToHTML(String sDocPath, String finalDestinationPath, String downloadImagePath, String returnPath) {
    try {
        String root = finalDestinationPath;
        int lastIndex = sDocPath.lastIndexOf(File.separator);
        String sFileName = sDocPath.substring(lastIndex + 1, sDocPath.length());
        String fileNameWithOutExt = FilenameUtils.removeExtension(sFileName);
        String fileOutName = root + File.separator + fileNameWithOutExt + ".html";
        InputStream in = new FileInputStream(new File(sDocPath));
        String ext = FilenameUtils.getExtension(sDocPath);
        OdfTextDocument document = null;
        OdfSpreadsheetDocument odfSpreadsheetDocument = null;
        if (ext.equalsIgnoreCase("odt")) {
            document = OdfTextDocument.loadDocument(in);
        }
        else if (ext.equalsIgnoreCase("ods")) {
            odfSpreadsheetDocument = OdfSpreadsheetDocument.loadDocument(in);
        }
        org.odftoolkit.odfdom.converter.xhtml.XHTMLOptions options = org.odftoolkit.odfdom.converter.xhtml.XHTMLOptions.create();
        // Extract images
        String sLocalSystemImagePath = finalDestinationPath + File.separator + "images" + File.separator;
        File imageFolder = new File(sLocalSystemImagePath);
        options.setExtractor(new org.openskye.resource.FileImageExtractor(imageFolder));
        // URI resolver
        String localHostImagePath = downloadImagePath + File.separator + "images" + File.separator;
        FileResolverOdt fileURIResolver = new FileResolverOdt(new File(localHostImagePath));
        options.URIResolver(fileURIResolver);
        OutputStream out = new FileOutputStream(new File(fileOutName));
        if (ext.equalsIgnoreCase("odt")) {
            org.odftoolkit.odfdom.converter.xhtml.XHTMLConverter.getInstance().convert(document, out, options);
        }
        else if (ext.equalsIgnoreCase("ods")) {
            org.odftoolkit.odfdom.converter.xhtml.XHTMLConverter.getInstance().convert(odfSpreadsheetDocument, out, options);
        }
        return returnPath + File.separator + fileNameWithOutExt + ".html";
    } catch (Exception e) {
        return "Failed to open the document: " + sDocPath + " due to error: " + e.getMessage();
    }
}
I am able to convert the ODS file to HTML, and the file opens in the browser, but it displays some question marks (????) as well as a date and time.
I just want to display the file as it opens in OpenOffice, without the date/time and special characters. I also want to differentiate the sheets.
Here is the ODS file which I want to convert: Link
Below is an attached screenshot of the ODS file converted to HTML.
I would be thankful if anyone could help me find a solution.
I have an H2 database; I want to generate the SQL script and, right after that, rename the database file.
I do something like this:
Script.execute("jdbc:h2:" + directory + File.separatorChar + dbName, user, password, file);
Date d = new Date();
Timestamp t = new Timestamp(d.getTime());
File oldfile = new File(directory + File.separatorChar + dbName + ".h2.db");
File newfile = new File(directory + File.separatorChar + t.getTime() + "backup.h2.db");
oldfile.renameTo(newfile);
but the rename fails! I think it's because Script keeps my database open, but how can I close it and rename the file right afterwards?
Thanks
Solution:
I just added the following after the Script command; thanks for your help:
Connection conn = DriverManager.getConnection("jdbc:h2:" + directory + File.separatorChar + dbName, user, password);
conn.prepareStatement("SHUTDOWN DEFRAG;").execute();
conn.close();
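Putting it together, a sketch of the whole sequence with a check on renameTo, which only reports failure through its return value:
String url = "jdbc:h2:" + directory + File.separatorChar + dbName;
Script.execute(url, user, password, file);                    // generate the SQL script
Connection conn = DriverManager.getConnection(url, user, password);
conn.prepareStatement("SHUTDOWN DEFRAG;").execute();          // closes the database and releases the .h2.db file
conn.close();
File oldfile = new File(directory + File.separatorChar + dbName + ".h2.db");
File newfile = new File(directory + File.separatorChar + System.currentTimeMillis() + "backup.h2.db");
if (!oldfile.renameTo(newfile)) {                             // renameTo does not throw, it just returns false
    throw new IOException("Could not rename " + oldfile + " to " + newfile);
}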
I am trying to download a video file stored as a BLOB in MySQL. The file downloads fine, but I guess it gets corrupted. The formats to be downloaded are ogg, webm and mp3. The problem is that when I try to convert any downloaded video using ffmpeg, it says "invalid data found during processing".
I am using the following code:
Blob image = null;
Connection con = null;
ResultSet rs = null;
try {
    Class.forName("com.mysql.jdbc.Driver");
    con = MySqlConnect.getDBConnection();
    String sql = "select videos, filename from sun.videostore where id ='" + fileID + "'";
    Statement stmt = con.createStatement();
    rs = stmt.executeQuery(sql);
    while (rs.next()) {
        String filename = rs.getString("filename");
        Blob video = rs.getBlob("videos");
        File file1 = new File("C:\\DowloadFile\\" + filename);
        FileOutputStream foStream = new FileOutputStream(file1);
        if (video != null) {
            int length = (int) video.length();
            InputStream is2 = video.getBinaryStream();
            int b = 0;
            while (b != -1) {
                b = is2.read();
                foStream.write(b);
            }
        }
    }
} catch (Exception e) {
    System.out.println("Ecxeption in getting data from DB = " + e);
}
Now I have checked my uploaded file; it is not corrupted. I have tried a different way to upload the video into the DB.
File video = new File(filename);
fis = new FileInputStream(video);
ps.setBinaryStream(2, fis, (int) video.length());//preparedStatement
If I upload the file in the above way, I get a correct file when downloading.
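As a side note on the download loop in the question: when read() returns -1 at end of stream, that -1 is still passed to write(), which appends a stray 0xFF byte, and copying one byte at a time is slow. A buffered sketch (reusing video and filename from the code above) avoids both:
try (InputStream is2 = video.getBinaryStream();
     FileOutputStream foStream = new FileOutputStream(new File("C:\\DowloadFile\\" + filename))) {
    byte[] buffer = new byte[8192];
    int length;
    while ((length = is2.read(buffer)) != -1) {    // check for end of stream before writing
        foStream.write(buffer, 0, length);
    }
}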