I'm getting an image from a server as an InputStream and then saving it to a MySQL database. It works when I use Thread.sleep(5000);, but if I don't, no picture is saved to the DB, or only one picture, and half of it or less. So I understand the program needs time to write the image to the database, but how much time? That is the question: I would like to know exactly when it has finished writing an image to the database so that I can start with the next image. Below is my code:
ResultSet rs = stmt.executeQuery(query);
while (rs.next()) {
    int ID = rs.getInt(1);
    String myName = rs.getString(2);
    try {
        String myCommand = "take picture and save /mydir/mydir2/mydir3" + myName + ".png";
        telnet.sendCommand(myCommand); // Here taking a picture via telnet
        // Thread.sleep(5000); // If I uncomment this line it works
        String sqlCommand = "UPDATE my_table SET Picture = ? WHERE ID ='" + ID + "';";
        PreparedStatement statement = conn.prepareStatement(sqlCommand);
        String ftpUrl = "ftp://" + server_IP + "/mydir/mydir2/mydir3" + myName + ".png;type=i";
        URL url = new URL(ftpUrl);
        URLConnection connUrl = url.openConnection();
        // Thread.sleep(5000); // If I uncomment this line, it works too.
        InputStream inputStreamTelnet = connUrl.getInputStream();
        statement.setBlob(1, inputStreamTelnet);
        int row = statement.executeUpdate();
        if (row > 0) {
            System.out.println("A picture was inserted into DB.");
            System.out.println("Value of row(s) : " + row);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
} // End of while
I would expect to put the wait (sleep) after InputStream inputStreamTelnet = connUrl.getInputStream();, but it doesn't work when I put the sleep after that line; it only works when the sleep comes before it. Could someone explain why? I would also like to avoid Thread.sleep(5000); and instead wait the exact amount of time, or not wait at all, which would make the program faster; there might also be cases where saving the picture takes more than 5 seconds, or maybe it isn't saving the picture that takes time but opening the URL connection. There are two sleep lines in the code; when I uncomment either one of them, the program works (saves the images to the MySQL DB successfully). I also verified on the server that the images exist, but in the end I don't see them in the MySQL DB.
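For what it's worth, instead of a fixed sleep you could poll the file on the FTP server until its size stops changing. The sketch below is only a minimal illustration under two assumptions: the telnet capture returns before the file is fully written, and re-reading the file a few times while polling is acceptable. The helper name waitUntilStable is made up; you would call it after telnet.sendCommand(...) and before opening the final connection.
static void waitUntilStable(String ftpUrl) throws InterruptedException {
    long previous = -1;
    while (true) {
        long current = -1;
        try (InputStream in = new URL(ftpUrl).openConnection().getInputStream()) {
            byte[] buf = new byte[8192];
            int n;
            current = 0;
            while ((n = in.read(buf)) != -1) {
                current += n; // count how many bytes the file holds right now
            }
        } catch (IOException e) {
            // file not visible on the server yet; keep polling
        }
        if (current > 0 && current == previous) {
            return; // same size on two consecutive polls: assume the write finished
        }
        previous = current;
        Thread.sleep(200); // short poll interval instead of a blind 5-second sleep
    }
}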
UPDATE: I removed the try block and the telnet code; now it works without waiting, but I really need the telnet functionality...
UPDATE 2: After inspecting my telnet class, I found out that I had forgotten to apply a change I made to a single line... now it works without waiting!
Huh, I tested my code on JDK 1.7.0_67 / PostgreSQL 9.2 and it works well:
public class ImageLoader {
    private static final int START_IMAGE_ID = 1;
    private static final int END_IMAGE_ID = 1000;
    private static final String IMAGE_URL = "http://savepic.net/%d.jpg";

    public static void main(String[] args) throws SQLException, IOException {
        Connection connection = DriverManager.getConnection("jdbc:postgresql://localhost:5432/test", "username", "password");
        PreparedStatement imageStatement = connection.prepareStatement("INSERT INTO public.image VALUES(?, ?)");
        for (int i = START_IMAGE_ID; i <= END_IMAGE_ID; i++) {
            String imageUrl = String.format(IMAGE_URL, i);
            URL url = new URL(imageUrl);
            URLConnection urlConnection = url.openConnection();
            imageStatement.setLong(1, i);
            imageStatement.setBytes(2, read(urlConnection.getInputStream()));
            int count = imageStatement.executeUpdate();
            if (count != 1) {
                throw new IllegalStateException("Image with ID = " + i + " not inserted");
            } else {
                System.out.println("Image (" + imageUrl + ") saved to database");
            }
        }
        imageStatement.close();
        connection.close();
    }

    private static byte[] read(InputStream inputStream) throws IOException {
        ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream(1 << 15); // assume image average size ~ 32 Kbytes
        BufferedInputStream bufferedInputStream = new BufferedInputStream(inputStream);
        byte[] buffer = new byte[1 << 10];
        int read = -1;
        while ((read = bufferedInputStream.read(buffer)) != -1) {
            byteArrayOutputStream.write(buffer, 0, read);
        }
        return byteArrayOutputStream.toByteArray();
    }
}
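As an aside, the answer above targets JDK 1.7, which is why read hand-rolls the copy loop. On Java 9 or later the standard library can drain a stream itself, so the helper reduces to a one-liner with the same semantics:
private static byte[] read(InputStream inputStream) throws IOException {
    return inputStream.readAllBytes(); // Java 9+; drains the stream into a byte array
}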
I have a program that reads information from a database. Sometimes the message is bigger than expected, so before sending it to my broker I zip it with this code:
public static byte[] zipBytes(byte[] input) throws IOException {
    ByteArrayOutputStream bos = new ByteArrayOutputStream(input.length);
    OutputStream ej = new DeflaterOutputStream(bos);
    ej.write(input);
    ej.close();
    bos.close();
    return bos.toByteArray();
}
Recently I retrieved an 80 MB message from the DB, and when the code above executes, an OutOfMemoryError is thrown on the ByteArrayOutputStream line. My Java program only has 512 MB of memory for the whole process and can't be given more.
How can I solve this?
This is not a duplicate question: I can't increase the heap size.
EDIT:
This is the flow of my Java code:
rs = stmt.executeQuery("SELECT data FROM tbl_alotofinfo"); // rs is a valid ResultSet
while (rs.next()) {
    byte[] bt = rs.getBytes("data");
    if (bt.length > 60) { // only rows with content > 60. I need to know the size of the message; if I use rs.getBinaryStream I cannot know the size, can I?
        if (bt.length >= 10000000) {
            // here I need to zip the byte array before sending it, so
            bt = zipBytes(bt); // this method is above
            // then publish bt to my broker
        } else {
            // here publish the byte array to my broker
        }
    }
}
EDIT
I've tried with PipedInputStream, and the memory the process consumes is the same as with the zipBytes(byte[] input) method.
private InputStream zipInputStream(InputStream in) throws IOException {
    PipedInputStream zipped = new PipedInputStream();
    PipedOutputStream pipe = new PipedOutputStream(zipped);
    new Thread(() -> {
        try (OutputStream zipper = new DeflaterOutputStream(pipe)) {
            IOUtils.copy(in, zipper);
            zipper.flush();
        } catch (IOException e) {
            IOUtils.closeQuietly(zipped); // close it on error case only
            e.printStackTrace();
        } finally {
            IOUtils.closeQuietly(in);
            IOUtils.closeQuietly(zipped);
            IOUtils.closeQuietly(pipe);
        }
    }).start();
    return zipped;
}
How can I compress my InputStream with Deflate?
EDIT
That information is sent to JMS in the Universal Messaging Server by Software AG, which uses a Nirvana client (documentation: https://documentation.softwareag.com/onlinehelp/Rohan/num10-2/10-2_UM_webhelp/um-webhelp/Doc/java/classcom_1_1pcbsys_1_1nirvana_1_1client_1_1n_consume_event.html). The data is carried in nConsumeEvent objects, and the documentation shows only two ways to pass the payload:
nConsumeEvent (String tag, byte[] data)
nConsumeEvent (String tag, Document adom)
https://documentation.softwareag.com/onlinehelp/Rohan/num10-5/10-5_UM_webhelp/index.html#page/um-webhelp%2Fco-publish_3.html%23
The connection code is:
nSessionAttributes nsa = new nSessionAttributes("nsp://127.0.0.1:9000");
MyReconnectHandler rhandler = new MyReconnectHandler();
nSession mySession = nSessionFactory.create(nsa, rhandler);
if (!mySession.isConnected()) {
    mySession.init();
}
nChannelAttributes chaAttr = new nChannelAttributes();
chaAttr.setName("mychannel"); // This is a topic
nChannel myChannel = mySession.findChannel(chaAttr);

List<nConsumeEvent> messages = new ArrayList<nConsumeEvent>();
rs = stmt.executeQuery("SELECT data FROM tbl_alotofinfo");
while (rs.next()) {
    byte[] bt = rs.getBytes("data");
    if (bt.length > 60) {
        nEventProperties prop = new nEventProperties();
        if (bt.length > 10000000) {
            bt = compressData(bt); // here I need to compress data without ByteArrayInputStream
            prop.put("isZip", "true");
            nConsumeEvent ncon = new nConsumeEvent("1", bt);
            ncon.setProperties(prop);
            messages.add(ncon);
        } else {
            prop.put("isZip", "false");
            nConsumeEvent ncon = new nConsumeEvent("1", bt);
            ncon.setProperties(prop);
            messages.add(ncon);
        }
    }
    nTransactionAttributes tattrib = new nTransactionAttributes(myChannel);
    nTransaction myTransaction = nTransactionFactory.create(tattrib);
    Vector<nConsumeEvent> m = new Vector<nConsumeEvent>(messages);
    myTransaction.publish(m);
    myTransaction.commit();
}
Because of that API restriction, at the end of the day I need to send the information as a byte array; if that is the only byte array in my code, that's OK. How can I compress the byte array, or the InputStream from rs.getBinaryStream(), in this implementation?
EDIT
The database server used is PostgreSQL v11.6
EDIT
I've applied the first of @VGR's solutions and it works fine.
Just one thing: the SELECT query runs inside a while (true) loop, like this:
while (true) {
    rs = stmt.executeQuery("SELECT data FROM tbl_alotofinfo"); // rs is a valid ResultSet
    // all the implementation described above in this post
    Thread.sleep(10000);
}
So a SELECT executes every 10 seconds.
But I've run a test with my program, and the memory just increases on each iteration. Why? If the information the database returns is the same on each request, shouldn't the memory stay as it was after the first request? Or maybe I forgot to close a stream?
while (true) {
    rs = stmt.executeQuery("SELECT data FROM tbl_alotofinfo"); // rs is a valid ResultSet
    while (rs.next()) {
        //byte[] bt = rs.getBytes("data");
        byte[] bt;
        try (BufferedInputStream source = new BufferedInputStream(
                rs.getBinaryStream("data"), 10_000_001)) {
            source.mark(this.zip + 1);
            boolean sendToBroker = true;
            boolean needCompression = true;
            for (int i = 0; i <= 10_000_000; i++) {
                if (source.read() < 0) {
                    sendToBroker = (i > 60);
                    needCompression = (i >= this.zip);
                    break;
                }
            }
            if (sendToBroker) {
                nEventProperties prop = new nEventProperties();
                // Rewind stream
                source.reset();
                if (needCompression) {
                    System.out.println("Message size larger than expected. Compressing message");
                    ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
                    try (OutputStream brokerStream = new DeflaterOutputStream(byteStream)) {
                        IOUtils.copy(source, brokerStream);
                    }
                    bt = byteStream.toByteArray();
                    prop.put("zip", "true");
                } else {
                    bt = IOUtils.toByteArray(source);
                }
                System.out.println("size: " + bt.length);
                prop.put("host", this.host);
                nConsumeEvent ncon = new nConsumeEvent("" + rs.getInt("xid"), bt);
                ncon.setProperties(prop);
                messages.add(ncon);
            }
        }
    }
}
For example, here is the heap memory at two points in time: the first time, memory use is above 500 MB, and the second time (with the same information from the database) it is above 1000 MB.
rs.getBytes("data") reads the entire 80 megabytes into memory at once. In general, if you are reading a potentially large amount of data, you don’t want to try to keep it all in memory.
The solution is to use getBinaryStream instead.
Since you need to know whether the total size is larger than 10,000,000 bytes, you have two choices:
Use a BufferedInputStream with a buffer of at least that size, which will allow you to use mark and reset in order to “rewind” the InputStream.
Read the data size as part of your query. You may be able to do this by using a Blob or using a function like LENGTH.
The first approach will use up 10 megabytes of program memory for the buffer, but that’s better than hundreds of megabytes:
while (rs.next()) {
    try (BufferedInputStream source = new BufferedInputStream(
            rs.getBinaryStream("data"), 10_000_001)) {
        source.mark(10_000_001);
        boolean sendToBroker = true;
        boolean needCompression = true;
        for (int i = 0; i <= 10_000_000; i++) {
            if (source.read() < 0) {
                sendToBroker = (i > 60);
                needCompression = (i >= 10_000_000);
                break;
            }
        }
        if (sendToBroker) {
            // Rewind stream
            source.reset();
            if (needCompression) {
                try (OutputStream brokerStream =
                        new DeflaterOutputStream(broker.getOutputStream())) {
                    source.transferTo(brokerStream);
                }
            } else {
                try (OutputStream brokerStream = broker.getOutputStream()) {
                    source.transferTo(brokerStream);
                }
            }
        }
    }
}
Notice that no byte arrays and no ByteArrayOutputStreams are used. The actual data is not kept in memory, except for the 10 megabyte buffer.
The second approach is shorter, but I’m not sure how portable it is across databases:
while (rs.next()) {
    Blob data = rs.getBlob("data");
    long length = data.length();
    if (length > 60) {
        try (InputStream source = data.getBinaryStream()) {
            if (length >= 10000000) {
                try (OutputStream brokerStream =
                        new DeflaterOutputStream(broker.getOutputStream())) {
                    source.transferTo(brokerStream);
                }
            } else {
                try (OutputStream brokerStream = broker.getOutputStream()) {
                    source.transferTo(brokerStream);
                }
            }
        }
    }
}
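If you would rather not depend on Blob support in the driver, the LENGTH-function variant mentioned earlier can be sketched like this. octet_length() is PostgreSQL's function for the byte length of a bytea column (the question states PostgreSQL 11.6; other databases name this function differently), and broker.getOutputStream() is the same hypothetical API as in the examples above:
rs = stmt.executeQuery(
        "SELECT data, octet_length(data) AS data_len FROM tbl_alotofinfo");
while (rs.next()) {
    long length = rs.getLong("data_len"); // the database reports the size for us
    if (length > 60) {
        try (InputStream source = rs.getBinaryStream("data")) {
            if (length >= 10000000) {
                try (OutputStream brokerStream =
                        new DeflaterOutputStream(broker.getOutputStream())) {
                    source.transferTo(brokerStream);
                }
            } else {
                try (OutputStream brokerStream = broker.getOutputStream()) {
                    source.transferTo(brokerStream);
                }
            }
        }
    }
}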
Both approaches assume there is some API available for your “broker” which allows the data to be written to an OutputStream. I’ve assumed, for the sake of example, that it’s broker.getOutputStream().
Update
It appears you are required to create nConsumeEvent objects, and that class only allows its data to be specified as a byte array in its constructors.
That byte array is unavoidable, obviously. And since there is no way to know the exact number of bytes a compressed version will require, a ByteArrayOutputStream is also unavoidable. (It’s possible to avoid using that class, but the replacement would be essentially a reimplementation of ByteArrayOutputStream.)
But you can still read the data as an InputStream in order to reduce your memory usage. And when you aren’t compressing, you can still avoid ByteArrayOutputStream, thereby creating only one additional byte array.
So, instead of this, which is not possible for nConsumeEvent:
if (needCompression) {
    try (OutputStream brokerStream =
            new DeflaterOutputStream(broker.getOutputStream())) {
        source.transferTo(brokerStream);
    }
} else {
    try (OutputStream brokerStream = broker.getOutputStream()) {
        source.transferTo(brokerStream);
    }
}
It should be:
byte[] bt;
if (needCompression) {
    ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
    try (OutputStream brokerStream = new DeflaterOutputStream(byteStream)) {
        source.transferTo(brokerStream);
    }
    bt = byteStream.toByteArray();
} else {
    bt = source.readAllBytes();
}
prop.put("isZip", String.valueOf(needCompression));
nConsumeEvent ncon = new nConsumeEvent("1", bt);
ncon.setProperties(prop);
messages.add(ncon);
Similarly, the second example should replace this:
if (length >= 10000000) {
    try (OutputStream brokerStream =
            new DeflaterOutputStream(broker.getOutputStream())) {
        source.transferTo(brokerStream);
    }
} else {
    try (OutputStream brokerStream = broker.getOutputStream()) {
        source.transferTo(brokerStream);
    }
}
with this:
byte[] bt;
if (length >= 10000000) {
    ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
    try (OutputStream brokerStream = new DeflaterOutputStream(byteStream)) {
        source.transferTo(brokerStream);
    }
    bt = byteStream.toByteArray();
    prop.put("isZip", "true");
} else {
    bt = source.readAllBytes();
    prop.put("isZip", "false");
}
nConsumeEvent ncon = new nConsumeEvent("1", bt);
ncon.setProperties(prop);
messages.add(ncon);
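One caveat worth adding: InputStream.transferTo and readAllBytes, used in these examples, were added in Java 9. On older JDKs, IOUtils.copy and IOUtils.toByteArray from Commons IO (which the question already uses) are drop-in replacements.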
So I have a database with values like this...
I'm trying to append values from this txt file using INSERT INTO, without replacing the existing data...
but when I reload/refresh the database, no new data is appended to it.
Here is my code:
public static void importDatabase(String fileData) {
    try {
        File database = new File(fileData);
        FileReader fileInput = new FileReader(database);
        BufferedReader in = new BufferedReader(fileInput);
        String line = in.readLine();
        line = in.readLine();
        String[] data;
        while (line != null) {
            data = line.split(",");
            int ID = Integer.parseInt(data[0]);
            String Nama = data[1];
            int Gaji = Integer.parseInt(data[2]);
            int Absensi = Integer.parseInt(data[3]);
            int cuti = Integer.parseInt(data[4]);
            String Status = data[5];
            String query = "insert into list_karyawan values(?,?,?,?,?,?)";
            ps = getConn().prepareStatement(query);
            ps.setInt(1, ID);
            ps.setString(2, Nama);
            ps.setInt(3, Gaji);
            ps.setInt(4, Absensi);
            ps.setInt(5, cuti);
            ps.setString(6, Status);
            line = in.readLine();
        }
        ps.executeUpdate();
        ps.close();
        con.close();
        System.out.println("Database Updated");
        in.close();
    } catch (Exception e) {
        System.out.println(e);
    }
}
When I run it, it shows no error, but the data never gets into the database. Where did I go wrong?
Auto-commit mode is enabled by default.
The JDBC driver throws a SQLException when a commit or rollback operation is performed on a connection that has auto-commit set to true.
Symptoms of the problem can be unexpected application behavior.
Update the JVM configuration for the ActiveMatrix BPM node to use the following Oracle connection property:
autoCommitSpecCompliant=false. Try it once.
Note: I am not able to post this as a comment, so I posted it as an answer.
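For illustration, here is a hedged sketch of that import with auto-commit disabled and a single explicit commit. It also batches the rows, because in the question's code executeUpdate() sits outside the while loop, so only the last prepared statement would ever run. getConn() and the CSV layout are taken from the question:
static void importDatabase(String fileData) throws Exception {
    Connection con = getConn();
    con.setAutoCommit(false); // take explicit control of the transaction
    String query = "insert into list_karyawan values(?,?,?,?,?,?)";
    try (PreparedStatement ps = con.prepareStatement(query);
         BufferedReader in = new BufferedReader(new FileReader(fileData))) {
        String line = in.readLine(); // skip the header line, as in the question
        while ((line = in.readLine()) != null) {
            String[] data = line.split(",");
            ps.setInt(1, Integer.parseInt(data[0]));
            ps.setString(2, data[1]);
            ps.setInt(3, Integer.parseInt(data[2]));
            ps.setInt(4, Integer.parseInt(data[3]));
            ps.setInt(5, Integer.parseInt(data[4]));
            ps.setString(6, data[5]);
            ps.addBatch(); // queue this row; executeBatch() sends them all
        }
        ps.executeBatch();
        con.commit(); // one commit for the whole file
    } catch (Exception e) {
        con.rollback();
        throw e;
    } finally {
        con.close();
    }
}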
I am making an application that displays your saved browser passwords (right now I'm using Google Chrome) in an easy way. Every time I run this code I get an error at byte[] newbyte = Crypt32Util.cryptUnprotectData(mybyte);. The code below provides some context. I never had this problem before, and after some research I can't find a clear solution. I hope someone can help me with it.
Code:
Connection connection = null;
connection = DriverManager.getConnection("jdbc:sqlite:" + path_to_copied_db);
PreparedStatement statement = connection.prepareStatement(
        "SELECT `origin_url`,`username_value`,`password_value` FROM `logins`");
ResultSet re = statement.executeQuery();
StringBuilder builder = new StringBuilder();
while (re.next()) {
    String pass = "";
    try {
        byte[] mybyte = re.getBytes("password_value");
        byte[] newbyte = Crypt32Util.cryptUnprotectData(mybyte); // Error on this line: 71
        pass = new String(newbyte);
    } catch (Win32Exception e) {
        e.printStackTrace();
    }
    builder.append(user + ": " + re.getString("origin_url") + " "
            + re.getString("username_value") + " "
            + re.getBinaryStream("password_value") + "\n");
}
Error:
com.sun.jna.platform.win32.Win32Exception: The parameter is incorrect.
at com.sun.jna.platform.win32.Crypt32Util.cryptUnprotectData(Crypt32Util.java:128)
at com.sun.jna.platform.win32.Crypt32Util.cryptUnprotectData(Crypt32Util.java:103)
at com.sun.jna.platform.win32.Crypt32Util.cryptUnprotectData(Crypt32Util.java:90)
at Client.Client.main(Client.java:71)
I am getting a null value when I read the blob data from the database. What might be the issue? Can someone help me with this?
Connection con = null;
PreparedStatement psStmt = null;
ResultSet rs = null;
try {
    try {
        Class.forName("oracle.jdbc.driver.OracleDriver");
    } catch (ClassNotFoundException e) {
        e.printStackTrace();
    }
    con = DriverManager.getConnection(
            "jdbc:oracle:thin:#MyDatabase:1535:XE", "password", "password");
    System.out.println("connection established" + con);
    psStmt = con.prepareStatement("Select Photo from Person where Firstname=?");
    int i = 1;
    psStmt.setLong(1, "Nani");
    rs = null;
    rs = psStmt.executeQuery();
    InputStream inputStream = null;
    while (rs.next()) {
        inputStream = rs.getBinaryStream(1);
        //Blob blob = rs.getBlob(1);
        //Blob blob1 = (Blob)rs.getObject(1);
        //System.out.println("blob length "+blob1);//rs.getString(1);
    }
    System.out.println("bytessssssss " + inputStream); // here I am getting a null value.
} catch (SQLException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
}
I believe you didn't use the setString method to assign a value to firstname, which leads to the null.
For example:
ps = con.prepareStatement("Select photo from person where firstname = ?");
ps.setString(1, "kick"); // <----- add this line
System.out.println("bytes " + rs.getBinaryStream(1));
Other suggestions:
There is no need for rs = null; inside the try/catch block, because you already have rs = null; at the beginning of your code.
Also, note that InputStream is abstract, so it cannot be instantiated with new InputStream(); instead, get rid of InputStream inputStream = null; and declare the variable where the stream is actually read, inside the loop.
The most obvious error is using setLong instead of setString.
However, one practice here is fatal: declaring everything in advance. In some other languages this is good practice, but in Java one should declare variables as close to their use as possible.
That reduces scope, and it would have exposed the error: inputStream is used after a failed rs.next(), outside the loop, perhaps because no records were found.
This practice of declaring as near as feasible also enables try-with-resources, which is used here to automatically close the statement and the result set.
Connection con = null;
try {
    Class.forName("oracle.jdbc.driver.OracleDriver");
    con = DriverManager.getConnection(
            "jdbc:oracle:thin:#MyDatabase:1535:XE", "password", "password");
    System.out.println("connection established" + con);
    try (PreparedStatement psStmt = con.prepareStatement(
            "Select Photo from Person where Firstname=?")) {
        int i = 1;
        psStmt.setString(1, "Nani");
        try (ResultSet rs = psStmt.executeQuery()) {
            while (rs.next()) {
                try (InputStream inputStream = rs.getBinaryStream(1)) {
                    //Blob blob = rs.getBlob(1);
                    //Blob blob1 = (Blob)rs.getObject(1);
                    //System.out.println("blob length "+blob1);//rs.getString(1);
                    Files.copy(inputStream, Paths.get("C:/photo-" + i + ".jpg"));
                }
                ++i;
            }
            //ERROR System.out.println("bytessssssss "+inputStream);
        } // Closes rs.
    } // Closes psStmt.
} catch (Exception e) {
    e.printStackTrace();
}
1- In your code, when setting the parameter values of the SQL query, be sure to use the data type appropriate to the field. So here you should use
psStmt.setString(1, "Nani");
instead of
psStmt.setLong(1, "Nani");
2- Make sure that the query is correct (table name, field names).
3- Make sure that the table contains data.
I'm trying to restore a MySQL database from a file generated with mysqldump.
I do it with an ArrayList that contains each query of the restoration plan and I execute each one of them with a Statement.
But sometimes it stops at some point of the process (it can be a different point on different executions). It doesn't show any error message; it just hangs (and when this happens, I need to restart the MySQL service).
This is the code of the restoration:
ArrayList<String> sql;
int res;
FileSQLCommandManager fichero = null;
try {
    if (pass == null)
        conectar(conn);
    else
        conectar(conn, pass);
    Statement st = null;
    st = conn.createStatement();
    st.executeUpdate("SET FOREIGN_KEY_CHECKS=0");
    PreparedStatement stConstraints = null;
    String cadenaSQL = null;
    String cadenaSQLConstraints = null;
    String cadenaConstraints;
    ResultSet rs;
    boolean ejecutar = false;
    fichero = new FileSQLCommandManager(fic);
    fichero.open();
    sql = fichero.read();
    cadenaSQL = "";
    for (int i = 0; i < sql.size(); i++) {
        cadenaSQL = sql.get(i);
        ejecutar = true;
        if (ejecutar) {
            st = null;
            st = conn.createStatement();
            res = st.executeUpdate(cadenaSQL);
            if (res == Statement.EXECUTE_FAILED) {
                System.out.println("HA FALLADO LA CONSULTA " + cadenaSQL); // "the query failed"
            }
        }
    }
    st.executeUpdate("SET FOREIGN_KEY_CHECKS=1");
    st.close();
    fichero.close();
    commit();
    desconectar();
    fichero = null;
    return true;
} catch (Exception ex) {
    ex.printStackTrace();
    rollback();
    desconectar();
    return false;
}
FileSQLCommandManager is a class that fills the ArrayList. That part works; the ArrayList content is all right. It stops on executeUpdate of an arbitrary query (not always; sometimes it works without problems WITH THE SAME SQL FILE).
First I disable the foreign key checks because the script can drop a table that is referenced elsewhere (the order in which tables are recreated is set by the SQL dump).
Any hint?
Thanks; I'm going mad with this :(
Why are you going through all that work, when a simple mysql < db_backup.dump will restore the whole thing for you?
This really works for restore:
String comando = "C:\\MySQL\\bin\\mysql.exe --host=localhost --port=3306 --user=root --password=123 < D:\\back.sql";
File f = new File("restore.bat");
FileOutputStream fos = new FileOutputStream(f);
fos.write(comando.getBytes());
fos.close();
Process run = Runtime.getRuntime().exec("cmd /C start restore.bat ");
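If you'd rather not write a temporary .bat file, ProcessBuilder.redirectInput can play the role of the shell's < redirection. This is a sketch reusing the same placeholder paths and credentials as above:
ProcessBuilder pb = new ProcessBuilder(
        "C:\\MySQL\\bin\\mysql.exe",
        "--host=localhost", "--port=3306",
        "--user=root", "--password=123");
pb.redirectInput(new File("D:\\back.sql")); // stdin <- dump file, no .bat needed
pb.redirectErrorStream(true);               // merge stderr into stdout
Process restore = pb.start();
int exit = restore.waitFor();               // 0 means mysql.exe exited cleanly
System.out.println(exit == 0 ? "Restore finished" : "Restore failed");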
And for backup:
DateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd", Locale.forLanguageTag("ru"));
java.util.Date currentDate = new java.util.Date();
Process p = null;
try {
    Runtime runtime = Runtime.getRuntime();
    p = runtime.exec("C:\\MySQL\\bin\\mysqldump.exe --default-character-set=utf8 -uroot -p123 -c -B shch2 -r "
            + "D:/" + dateFormat.format(currentDate) + "_backup" + ".sql");
    // change the dbpass and dbname to your dbpass and dbname
    int processComplete = p.waitFor();
    if (processComplete == 0) {
        System.out.println("Backup created successfully!");
    } else {
        System.out.println("Could not create the backup");
    }
} catch (Exception e) {
    e.printStackTrace();
}
But you need to supply the exact paths to mysql.exe and mysqldump.exe.