H2: close connection after Script.execute() - Java

I have an H2 database. I want to generate the SQL script and, right after that, rename the database file.
I do something like this:
Script.execute("jdbc:h2:" + directory + File.separatorChar + dbName, user, password, file);
Date d = new Date();
Timestamp t = new Timestamp(d.getTime());
File oldfile = new File(directory + File.separatorChar + dbName + ".h2.db");
File newfile = new File(directory + File.separatorChar + t.getTime() + "backup.h2.db");
oldfile.renameTo(newfile);
But the rename fails! I think it's because Script opens my database, but how can I close it and rename the file right afterwards?
Thanks.
Solution:
I just added the following after the Script command, thanks for your help:
Connection conn = DriverManager.getConnection("jdbc:h2:" + directory + File.separatorChar + dbName, user, password);
conn.prepareStatement("SHUTDOWN DEFRAG;").execute();
conn.close();
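For reference, a minimal sketch of the whole sequence (reusing the directory, dbName, user, password and file variables from above): SHUTDOWN closes the database and releases the file lock, and renameTo's return value is worth checking because it fails silently otherwise.
Script.execute("jdbc:h2:" + directory + File.separatorChar + dbName, user, password, file);
// Open a short-lived connection only to shut the database down and release the .h2.db file
try (Connection conn = DriverManager.getConnection(
        "jdbc:h2:" + directory + File.separatorChar + dbName, user, password)) {
    conn.prepareStatement("SHUTDOWN DEFRAG").execute();
}
File oldFile = new File(directory + File.separatorChar + dbName + ".h2.db");
File newFile = new File(directory + File.separatorChar + System.currentTimeMillis() + "backup.h2.db");
if (!oldFile.renameTo(newFile)) {
    System.err.println("Rename failed: " + oldFile + " -> " + newFile);
}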

Related

Apache VFS Finds SFTP Server But Does Not Download File ("does not exist")

I am trying to download a file from an SFTP server via Apache VFS. VFS finds the server itself, but it does not find the file I want to download:
Could not copy "sftp://host/mnt/myFiles/Register/2022.log" because it does not exist.
VFS finds my local file that I want to copy to, and the server itself, but not the file on the server.
What am I missing? I understand there are similar questions on Stack Overflow, but I have tried to apply their answers (using SftpFileSystemConfigBuilder.getInstance().setUserDirIsRoot(opts, true);, creating several managers...) and so far the issue remains.
Prints:
local found? true! file:///C:/Users/Chaddington/Desktop/MyProject/output/2022.log
base found? true! sftp://host/
SFTP found? false! sftp://host/mnt/myFiles/Register/2022.log
Exception:
org.apache.commons.vfs2.FileSystemException: Could not copy "sftp://host/mnt/myFiles/Register/2022.log" because it does not exist.
My code:
// Relevant variables
String localPath = "C:/Users/Chaddington/Desktop/MyProject/output/2022.log";
String remoteUrl = "sftp://host";
String remotePathBase = "sftp://host";
String remotePathRelative = "mnt/myFiles/Register/2022.log";
String myUsr = "username";
String myPass = "password";
// Makes sure there is a file to copy to in my local directory
File fileCopy = new File(localPath);
if (fileCopy.exists()) {
    fileCopy.delete();
}
fileCopy.createNewFile();
// Sets up VFS
StandardFileSystemManager manager = (StandardFileSystemManager) VFS.getManager();
StaticUserAuthenticator auth = new StaticUserAuthenticator(remoteUrl, myUsr, myPass);
FileSystemOptions opts = new FileSystemOptions();
DefaultFileSystemConfigBuilder.getInstance().setUserAuthenticator(opts, auth);
SftpFileSystemConfigBuilder.getInstance().setUserDirIsRoot(opts, true);
// Tries to find relevant files
FileObject localFile = manager.resolveFile(localPath);
System.out.println("local found? " + localFile.exists() + "! " + localFile.getPublicURIString());
FileObject baseFile = manager.resolveFile(remotePathBase, opts);
System.out.println("base found? " + baseFile.exists() + "! " + baseFile.getPublicURIString());
manager.setBaseFile(baseFile);
FileObject remoteFile = manager.resolveFile(remotePathRelative, opts);
System.out.println("SFTP found? " + remoteFile.exists() + "! " + remoteFile.getPublicURIString());
localFile.copyFrom(remoteFile, Selectors.SELECT_SELF);
I am using:
Java 8
Apache Commons VFS2 2.6.0
Apache Commons Logging 1.2
JSch 0.1.55
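One thing worth checking (a sketch of an assumption, not a confirmed fix): with setUserDirIsRoot(opts, true), Commons VFS resolves SFTP paths relative to the login user's home directory, so /mnt/myFiles/... may be looked up under the home directory rather than the server root. Reusing the manager and auth objects from the code above, the absolute path can be resolved against the root like this:
// Resolve the absolute server path with userDirIsRoot = false, so
// "/mnt/myFiles/..." is taken from the server's root rather than the user's home.
FileSystemOptions rootOpts = new FileSystemOptions();
DefaultFileSystemConfigBuilder.getInstance().setUserAuthenticator(rootOpts, auth);
SftpFileSystemConfigBuilder.getInstance().setUserDirIsRoot(rootOpts, false);
FileObject remoteAbsolute = manager.resolveFile("sftp://host/mnt/myFiles/Register/2022.log", rootOpts);
System.out.println("absolute SFTP found? " + remoteAbsolute.exists() + "! " + remoteAbsolute.getPublicURIString());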

CSV file from HDFS to Oracle BLOB using Spark

I'm working on a Java app that uses Spark 2.3.1 to load data from Oracle into HDFS and vice versa.
I want to create a CSV file in HDFS and then load it into an Oracle (12.2) BLOB.
The code:
//create Dataset
Dataset<Row> dataset = SparkService.sql("select * from test_table");
String trgtFileWithPath = "/tmp/test_table.csv";
//save file in HDFS
dataset.write().mode("overwrite").format("csv").save(trgtFileWithPath);
//get file from HDFS
JavaSparkContext jsc = SparkContextUtil.getJavaSparkContext("appId");
JavaRDD<String> textFile = jsc.textFile(trgtFileWithPath);
//Call Oracle package, that inserts into table with BLOB field
File csvFile = new File("/tmp/ETLFramework/test_table1.csv");
BufferedInputStream bis = new BufferedInputStream(new FileInputStream(csvFile), 500);
Connection conn = tbl.getJdbcConnection(); //there is tbl var with java.sql.Connection
CallableStatement cstmt = conn.prepareCall(String.format("{call %s(?, ?, ?)}", "ORACLE_API_FOR_ETL_FRAMEWORK.INSERT_LOB"));
cstmt.setString(1, "FILE_TO_LOB");
cstmt.setString(2, "/tmp/test_table.csv");
cstmt.setClob(3, bis, (int) csvFile.length());
cstmt.execute();
if (!conn.getAutoCommit()) {
    conn.commit();
}
I'm new to Spark, so any ideas on how to convert the JavaRDD to a BufferedInputStream, or how to get rid of the mess above and write the Dataset to an Oracle BLOB in a saner way?
Thanks
Finally, after a couple of days of fighting with Oracle, Hadoop and Spark, I found a solution for my task:
try {
    String trgtFolderPath = "tmp/ETLFramework/csv/form_name";
    Configuration conf = new Configuration();
    String hdfsUri = "hdfs://" + /*nameNode*/ + ":" + /*hdfsPort*/;
    FileSystem fileSystem = FileSystem.get(URI.create(hdfsUri), conf);
    RemoteIterator<LocatedFileStatus> fileStatusListIterator = fileSystem.listFiles(new Path(trgtFolderPath), true);
    while (fileStatusListIterator.hasNext()) {
        LocatedFileStatus fileStatus = fileStatusListIterator.next();
        String fileName = fileStatus.getPath().getName();
        if (fileName.contains(".csv") && fileStatus.getLen() > 0) {
            log.info("fileName=" + fileName);
            log.info("fileStatus.getLen=" + fileStatus.getLen());
            BufferedInputStream bis = new BufferedInputStream(fileSystem.open(new Path(trgtFolderPath + "/" + fileName)), 500);
            ETLParams param = ETLParams.getParams();
            Connection conn = tbl.getJdbcConnection();
            String apiPackageInsertLOB = ETLService.replaceParams(tbl.getConnection().getFullSchema() + "." + tbl.getApiPackage().getDbTableApiPackageInsertLOB(), param.getParamsByName());
            log.info(String.format("Call %s(%s, %s, %s);", apiPackageInsertLOB, tbl.getFullTableName(), trgtFolderPath + "/" + fileName, "p_nInsertedRows"));
            CallableStatement cstmt = conn.prepareCall(String.format("{call %s(?, ?, ?, ?)}", apiPackageInsertLOB));
            cstmt.setString(1, tbl.getFullTableName());
            cstmt.setString(2, trgtFolderPath + "/" + fileName);
            cstmt.setBlob(3, bis, fileStatus.getLen());
            cstmt.registerOutParameter(4, Types.INTEGER);
            cstmt.execute();
            int rowsInsertedCount = cstmt.getInt(4); // read the OUT parameter registered at index 4
            log.info("Inserted " + rowsInsertedCount + " rows into table blob_file");
            cstmt.close();
        }
    }
    fileSystem.close();
}
catch (IOException | SQLException exc) {
    exc.printStackTrace();
}
Writing a 2 GB CSV from the Spark Dataset into HDFS, and then reading that CSV from HDFS into an Oracle BLOB, took about 5 minutes.
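For readers without the surrounding ETL framework classes (ETLParams, ETLService, tbl), here is a stripped-down sketch of the same idea; the HDFS URI, target folder, procedure name and the jdbcConnection variable are placeholder assumptions. The core point is that an HDFS part file can be opened as an InputStream and bound directly to the BLOB parameter:
// Assumed placeholders: hdfs://namenode:8020, /tmp/csv_out, MY_PKG.INSERT_LOB, jdbcConnection
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);
RemoteIterator<LocatedFileStatus> it = fs.listFiles(new Path("/tmp/csv_out"), true);
while (it.hasNext()) {
    LocatedFileStatus status = it.next();
    if (status.getPath().getName().endsWith(".csv") && status.getLen() > 0) {
        try (InputStream in = fs.open(status.getPath());
             CallableStatement cs = jdbcConnection.prepareCall("{call MY_PKG.INSERT_LOB(?, ?)}")) {
            cs.setString(1, status.getPath().getName());
            cs.setBlob(2, in, status.getLen()); // stream the HDFS file straight into the BLOB parameter
            cs.execute();
        }
    }
}
fs.close();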

Issue taking a MySQL database backup from Java

OK, I know there are lots of questions and articles related to this, and after following them and playing with them I still can't succeed. Here is my code:
import java.io.File;
import java.io.IOException;
import java.net.URISyntaxException;
import java.security.CodeSource;
import javax.swing.JOptionPane;
public class BackupData
{
    public static void main(String[] args) {
        try
        {
            /*NOTE: Getting path to the Jar file being executed*/
            /*NOTE: YourImplementingClass-> replace with the class executing the code*/
            CodeSource codeSource = BackupData.class.getProtectionDomain().getCodeSource();
            File jarFile = new File(codeSource.getLocation().toURI().getPath());
            String jarDir = jarFile.getParentFile().getPath();
            System.out.println("jarDir" + jarDir);
            /*NOTE: Creating Database Constraints*/
            String dbName = "xyz";
            String dbUser = "root";
            String dbPass = "root";
            /*NOTE: Creating Path Constraints for folder saving*/
            /*NOTE: Here the backup folder is created for saving inside it*/
            String folderPath = jarDir + "\\backup";
            /*NOTE: Creating Folder if it does not exist*/
            File f1 = new File(folderPath);
            System.out.println("f1" + f1);
            f1.mkdir();
            /*NOTE: Creating Path Constraints for backup saving*/
            /*NOTE: Here the backup is saved in a folder called backup with the name backup.sql*/
            String savePath = "\"" + jarDir + "\\backup\\" + "1.sql\"";
            System.out.println("savepath" + savePath);
            /*NOTE: Used to create a cmd command*/
            String executeCmd = "C:\\Program Files\\MySQL\\MySQL Workbench 6.3 CE\\mysqldump -u " + dbUser + " -p " + dbPass + " --database " + dbName + " -r " + savePath;
            /*NOTE: Executing the command here*/
            Process runtimeProcess = Runtime.getRuntime().exec(executeCmd);
            int processComplete = runtimeProcess.waitFor();
            /*NOTE: processComplete=0 if correctly executed, will contain other values if not*/
            if (processComplete == 0)
            {
                System.out.println("Backup Complete");
            }
            else
            {
                System.out.println("Backup Failure");
                System.out.println(processComplete);
            }
        }
        catch (URISyntaxException | IOException | InterruptedException ex)
        {
            JOptionPane.showMessageDialog(null, "Error at Backuprestore" + ex.getMessage());
        }
    }
}
And the output this code gives is "Backup Failure" and 2 (the processComplete value).
I just can't understand what I am doing wrong. Am I missing something?
I just can't figure out what the problem is. Any help will be appreciated, thanks.
Why do you do all this? There is a command-line utility called mysqldump for exactly this purpose.
The mysqldump client utility performs logical backups, producing a set of SQL statements that can be executed to reproduce the original database object definitions and table data. It dumps one or more MySQL databases for backup or transfer to another SQL server. The mysqldump command can also generate output in CSV, other delimited text, or XML format.
You may also find the following links from the MySQL manual useful:
https://dev.mysql.com/doc/refman/5.7/en/backup-methods.html
https://dev.mysql.com/doc/refman/5.7/en/copying-databases.html
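If the dump still has to be triggered from Java, a hedged sketch along these lines avoids two common pitfalls in the question's command string: Runtime.exec(String) splits the command on whitespace, so the space in "Program Files" breaks the executable path, and a space between -p and the password makes mysqldump prompt for it instead of using it. The mysqldump path, credentials and output folder are placeholder assumptions.
import java.io.File;
import java.io.IOException;

public class MysqldumpBackup {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Placeholder path and credentials - adjust to the local installation
        String mysqldump = "C:\\Program Files\\MySQL\\MySQL Workbench 6.3 CE\\mysqldump.exe";
        String dumpFile = "backup" + File.separator + "1.sql"; // assumes the backup folder already exists

        // ProcessBuilder passes each argument separately, so spaces in the
        // executable path need no extra quoting and no shell parsing is involved.
        ProcessBuilder pb = new ProcessBuilder(
                mysqldump,
                "-u", "root",
                "-proot",                 // password directly after -p, with no space
                "--databases", "xyz",
                "--result-file=" + dumpFile);
        pb.inheritIO();                   // show mysqldump's own error messages on the console
        int exitCode = pb.start().waitFor();
        System.out.println(exitCode == 0 ? "Backup Complete" : "Backup Failure: " + exitCode);
    }
}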

Allow users to download only once

A web app: the client side is JSP, the backend is Java, and the DB is a simple SQLite database.
In the DB there is a table called Reports that contains "files", and users are allowed to download each file in the DB only ONCE "due to security requirements". I have been trying to find a way to do that.
Is there any way I can write JSP code that lets users download the requested file from the DB only once?
I don't know if it is useful, but this is the Java code used to download from the DB:
String sql = "SELECT file, filename FROM reports INNER JOIN download USING(tipid) WHERE reports.tipid = ?"+
"AND download.ts_" + ae_num+ " = 0;";
PreparedStatement stmt = c.prepareStatement(sql);
String tipNum = request.getParameter("tipid");
if (tipNum != null) {
stmt.setString(1, tipNum);
//stmt.setString(2, tipNum);
ResultSet res = stmt.executeQuery();
BufferedInputStream fileBlob = null;
String filename = "";
while (res.next()) {
fileBlob = new BufferedInputStream(res.getBinaryStream("file"), DEFAULT_BUFFER_SIZE);
filename = res.getString("filename");
}
if (fileBlob != null) {
System.out.println(filename);
response.setContentType("APPLICATION/OCTET-STREAM");
response.setHeader("Content-Disposition", "attachment; filename=\"" + filename + "\"");
BufferedOutputStream output = new BufferedOutputStream(response.getOutputStream(),
DEFAULT_BUFFER_SIZE);
byte[] buffer = new byte[DEFAULT_BUFFER_SIZE];
int length;
while ((length = fileBlob.read(buffer)) > 0) {
output.write(buffer, 0, length);
}
output.close();
fileBlob.close();
Date now = new Date();
sql = "UPDATE download SET ts_" + ae_num + " = " + now.getTime() + " WHERE tipid = ?;";
System.out.println(sql);
stmt = c.prepareStatement(sql);
stmt.setString(1, tipNum);
stmt.executeUpdate();
stmt.close();
c.commit();
c.close();
The current problem I'm having is that whenever a user tries to download the requested file, it is counted as downloaded regardless of whether the user chooses open/save or cancel; even a cancel counts.
Any ideas? Would using cookies in JSP help implement this? If so, can someone guide me? Or how can I solve the download-count issue?
If you have control over the database, you could add another table with user_id, file_id and download_status columns. That way you would always have a record of who downloaded each file.
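As a rough sketch of that suggestion (the downloads table and its user_id, file_id and download_status columns are assumed names, as are the c, response, userId and fileId variables), the servlet could check the flag before streaming and flip it only after the bytes have been written without an exception:
// Assumed table: downloads(user_id, file_id, download_status), 0 = not yet downloaded, 1 = downloaded

// Check before serving the file...
try (PreparedStatement check = c.prepareStatement(
        "SELECT download_status FROM downloads WHERE user_id = ? AND file_id = ?")) {
    check.setInt(1, userId);
    check.setInt(2, fileId);
    try (ResultSet rs = check.executeQuery()) {
        if (rs.next() && rs.getInt("download_status") != 0) {
            response.sendError(HttpServletResponse.SC_FORBIDDEN, "File already downloaded");
            return;
        }
    }
}

// ...stream the file here, then mark it as downloaded only if no exception was thrown.
try (PreparedStatement mark = c.prepareStatement(
        "UPDATE downloads SET download_status = 1 WHERE user_id = ? AND file_id = ?")) {
    mark.setInt(1, userId);
    mark.setInt(2, fileId);
    mark.executeUpdate();
    c.commit();
}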

Java is putting my file path twice

Hi, I have made two files that are supposed to copy a file, but it builds the file path twice, e.g. Users/Name/Users/Name/Desktop/jar.jar.
It is prepending the location of the runnable jar I opened to start the program to the path I want.
Code:
String path1 = System.getProperty("user.dir") + File.separator + "Desktop" + File.separator + "Coding" + File.separator + "Temp";
File file = new File(path1);
String path2 = System.getProperty("user.dir") + File.separator + "Library" + File.separator + "LaunchAgents" + File.separator + "program.jar";
File file2 = new File(path2);
if (file2.exists()) {
    logger.warning("File 3 def");
    return;
}
File file4 = new File(file.getAbsolutePath() + File.separator + "copied.jar");
if (!file4.exists()) {
    logger.warning("cp " + file4.getAbsolutePath() + " : " + file2.getAbsolutePath());
    logger.warning("File 4 def");
    return;
}
Log:
WARNING: cp /Users/myuser/Desktop/Coding/Temp/Desktop/Coding/Temp/program.jar : /Users/myuser/Desktop/Coding/Temp/Library/LaunchAgents/copied.jar
WARNING: File 4 def
System.getProperty("user.dir") gets the current working directory. See documentation.
Perhaps you meant System.getProperty("user.home"), which gets your home directory.
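For example, a minimal sketch assuming the intent is to build both paths under the user's home directory on macOS:
// user.home is the user's home directory, e.g. /Users/myuser,
// regardless of where the jar was launched from.
String home = System.getProperty("user.home");
String tempDir = home + File.separator + "Desktop" + File.separator + "Coding" + File.separator + "Temp";
String agentJar = home + File.separator + "Library" + File.separator + "LaunchAgents" + File.separator + "program.jar";
System.out.println(tempDir);  // e.g. /Users/myuser/Desktop/Coding/Temp
System.out.println(agentJar); // e.g. /Users/myuser/Library/LaunchAgents/program.jar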
