Extracting data from sqlback formatted file for conversion to CSV - java

I have an sqback file representing an SQLite db file. I want to extract the data from this sqback file, i.e., the table names and their contents, and convert it into CSV format. I want to do this using Java.
Note: the sqback file will have already been uploaded from the Android device to a PC prior to processing, so I need a solution that is suitable for running server side.
Does anyone have any leads on how to perform such a task?

If using Android you can take advantage of the built-in SQLiteDatabase and SQLiteOpenHelper. You'll find all the info you need here.
After parsing everything you can export to CSV the way you want by using File.
EDIT: So basically what you need to do is read the bytes and parse them, and that way have access to the content. In some cases you don't even need to convert them to a String, since it could be that you only need the value of the byte (e.g. Offset: 44, Length: 4, Description: the schema format number; supported schema formats are 1, 2, 3, and 4).
You can always check whether your values are correct with any hex editor; even opening the sqlite file with a plain text editor of any kind would help.
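To make the offset idea concrete, here is a minimal sketch (my addition, not part of the original answer) that reads the 100-byte SQLite header and pulls out a couple of the documented fields, including the schema format number at offset 44. The field offsets follow the public SQLite file-format documentation; the class and variable names are made up for the example.
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class SqliteHeaderPeek {
    public static void main(String[] args) throws IOException {
        try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
            byte[] header = new byte[100]; // the SQLite header is the first 100 bytes
            in.readFully(header);
            // Offset 0, length 16: the magic string "SQLite format 3\0"
            String magic = new String(header, 0, 16, StandardCharsets.US_ASCII);
            // Offset 16, length 2: page size (big-endian unsigned short)
            int pageSize = ((header[16] & 0xFF) << 8) | (header[17] & 0xFF);
            // Offset 44, length 4: schema format number (big-endian int), values 1-4
            int schemaFormat = ((header[44] & 0xFF) << 24) | ((header[45] & 0xFF) << 16)
                             | ((header[46] & 0xFF) << 8) | (header[47] & 0xFF);
            System.out.println("magic: " + magic.trim());
            System.out.println("page size: " + pageSize);
            System.out.println("schema format: " + schemaFormat);
        }
    }
}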
Let's start from scratch. First, reading the file. There are two approaches:
a. Read the whole file and parse it afterwards
b. Read and parse the file in blocks (recommended, especially for bigger files)
Both approaches would share most of the following code:
// requires java.io.File, java.io.FileInputStream, java.io.FileNotFoundException, java.io.IOException
File file = new File("YOUR FILE ROUTE");
int len = 1024 * 512; // 512 KB read buffer
FileInputStream fis;
try {
    fis = new FileInputStream(file);
} catch (FileNotFoundException e1) {
    e1.printStackTrace();
    return;
}
byte[] b = new byte[len];
int bread;
try {
    // Read the file block by block and hand each block to the parser
    while ((bread = fis.read(b, 0, len)) != -1) {
        if (parseBlock(b, bread) == 1) // parseBlock is your own parsing routine
            return;
    }
    fis.close();
} catch (IOException e) {
    e.printStackTrace();
}
The difference between the two: either you get partial blocks and parse them on the fly (which I guess works for you), or you read the whole thing at once by putting:
fis.read(b, 0, fis.available());
instead of the while loop.
Ultimately your approach is right, and that is the way to get bytes into a String (new String(b)). Moreover, the first characters are likely to show up as weird symbols; if you have a look at the SQLite file format, those are reserved bytes for storing some metadata.
Open the sqlite file with any text editor and check that what you see there matches what comes out of your code.

This website indicates which extra libraries to make use of, and also provides examples of how to interact with sqlite files: http://www.xerial.org/trac/Xerial/wiki/SQLiteJDBC#Usage
Important things to note:
1) Make sure to load the sqlite-JDBC driver using the current class loader. This is done with the line shown just below.
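The answer doesn't quote the actual line; for the Xerial sqlite-jdbc driver it is normally the following (shown here as an assumed completion, not text from the original answer):
Class.forName("org.sqlite.JDBC"); // registers the SQLite JDBC driver with DriverManager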
2) The sqlite file IS a db, even if it's not sitting on a server somewhere. So you still must create a connection to the file to interact with it, and you must open and close that connection as well.
Connection connection = null;
connection = DriverManager.getConnection("jdbc:sqlite:" + path); // path is a String to the sqlite file (sqlite or sqback)
connection.close(); // after you are done with the file
3) Information can be extracted by using SQL to query the file. This returns a processable ResultSet object holding the data pertaining to your query.
Statement statement = connection.createStatement();
statement.setQueryTimeout(30);
ResultSet rs = statement.executeQuery("SELECT * FROM " + tblName);
4) From the ResultSet you can grab data using the getter methods, with either the column index or the column header key:
rs.getString("qo")
Hope that helps anyone having the same issue as I was having
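Since the original question is about dumping table names and contents to CSV, here is a minimal sketch (my addition) tying the pieces above together. It assumes the Xerial sqlite-jdbc driver is on the classpath; the use of sqlite_master to list the tables and the naive comma-joining (no quoting or escaping of values) are my own choices.
import java.io.PrintWriter;
import java.sql.*;
import java.util.ArrayList;
import java.util.List;

public class SqliteToCsv {
    public static void main(String[] args) throws Exception {
        Class.forName("org.sqlite.JDBC");
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:" + args[0])) {
            // Collect the user table names from the schema catalogue
            List<String> tables = new ArrayList<>();
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT name FROM sqlite_master WHERE type='table'")) {
                while (rs.next()) tables.add(rs.getString("name"));
            }
            // Dump each table into its own CSV file
            for (String table : tables) {
                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery("SELECT * FROM " + table);
                     PrintWriter out = new PrintWriter(table + ".csv", "UTF-8")) {
                    int cols = rs.getMetaData().getColumnCount();
                    while (rs.next()) {
                        StringBuilder row = new StringBuilder();
                        for (int i = 1; i <= cols; i++) {
                            if (i > 1) row.append(',');
                            row.append(rs.getString(i)); // naive: commas in values are not escaped
                        }
                        out.println(row);
                    }
                }
            }
        }
    }
}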

Related

How to store image from Android app to MySQL

I store the images in SQLite by converting the bitmap to a byte array.
Should I do the same thing here? That is, get the byte array from the bitmap, then send it as JSON to PHP, and finally store it in MySQL?
If yes, how can I do that? I could store strings in MySQL from the app, but couldn't do it with byte arrays.
Just convert the byte array into Base64... Base64 is technically a string, so JSON it to your webservice, convert it back from Base64 there... and the rest is history.
Here is a link for converting an image to Base64 in Android: How to convert an image into Base64 string?
BTW, the other guys are right that it's not very efficient to store the image itself in the db; storing a reference would be much better... unless you don't want your images sitting right on the SD card, which is a different concern, security-wise!
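As a rough sketch of the "JSON it to your webservice" step (my addition, untested), assuming the org.json classes that ship with Android, a hypothetical upload URL, and a base64Image string produced by the Base64 conversion described above:
import org.json.JSONObject;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

static int postImage(String base64Image) throws Exception {
    JSONObject payload = new JSONObject();
    payload.put("name", "photo1.jpg"); // hypothetical field names
    payload.put("image", base64Image);

    HttpURLConnection conn = (HttpURLConnection)
            new URL("http://example.com/upload.php").openConnection(); // hypothetical endpoint
    conn.setRequestMethod("POST");
    conn.setDoOutput(true);
    conn.setRequestProperty("Content-Type", "application/json");
    try (OutputStream out = conn.getOutputStream()) {
        out.write(payload.toString().getBytes("UTF-8"));
    }
    return conn.getResponseCode(); // the PHP side would base64_decode the "image" field
}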
You can refer to this SO discussion that talks about uploading files to a server using Android.
As a matter of fact, it is not recommended to store binary data in relational databases. Refer to this SO discussion. The recommended way is to store the binary data on a server disk location as a file and simply place the path of the file within the database. This prevents data corruption caused by discrepancies in the database character set and encodings.
Since you are willing to use another, better solution, here is the procedure.
Instead of storing the image in the database, create a directory specifically for the images. After you successfully upload an image, store its path in your database, and then use that path to refer to the image.
You can upload the image via a POST request, and then store a reference in the database (e.g. images/img1.bmp).
Whenever you need it you can fetch the file with an HTTP request (you need to write the PHP that handles that request).
You can access the image by using your server's public IP or domain, for example: mydomain.com/app1/images/img1.bmp
This is just one way to do it, so if you think about implementing it, look for file upload examples via POST; a minimal sketch of storing the reference follows below.
Hope this helps.
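For the "store the path in the database" step, a minimal server-side sketch using plain JDBC against MySQL might look like the following. The JDBC URL, credentials, and the photos table/column are hypothetical, and the MySQL Connector/J driver is assumed to be on the classpath.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Hypothetical helper: records where an uploaded image was saved
static void savePhotoPath(String relativePath) throws SQLException {
    try (Connection conn = DriverManager.getConnection(
             "jdbc:mysql://localhost:3306/appdb", "user", "password"); // hypothetical DB
         PreparedStatement ps = conn.prepareStatement(
             "INSERT INTO photos (file_path) VALUES (?)")) {           // hypothetical table
        ps.setString(1, relativePath); // e.g. "images/img1.bmp"
        ps.executeUpdate();
    }
}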
Use multipart file upload to send the file to your web service (use libraries like Android Asynchronous Http Client to make the job easy). Then you can encode the file to Base64 and store it in the database as text, but it is not best practice to store image files in the database; you should save the image as a file on the server and keep the path in MySQL.
public static String imgToBase64(Bitmap bitmap) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    try {
        // Compress the bitmap to JPEG (quality 20) into the byte stream
        bitmap.compress(Bitmap.CompressFormat.JPEG, 20, out);
        byte[] imgBytes = out.toByteArray();
        // Encode the JPEG bytes as a Base64 string
        return Base64.encodeToString(imgBytes, Base64.DEFAULT);
    } catch (Exception e) {
        return null;
    } finally {
        try {
            out.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
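For completeness, the reverse direction (my addition, not shown in the answer) decodes the Base64 string back into a Bitmap, e.g. when the image comes back from the web service:
public static Bitmap base64ToImg(String base64) {
    // Decode the Base64 text back into JPEG bytes, then into a Bitmap
    byte[] imgBytes = Base64.decode(base64, Base64.DEFAULT);
    return BitmapFactory.decodeByteArray(imgBytes, 0, imgBytes.length);
}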

Blob information modified when retrieving image from DB using plain JDBC

Problem Context
I am working on a small POC (which will be integrated into a bigger application) which consists of:
1. reading an image in a simple Java application program,
2. converting the image into a byte array: byte[] imageEncodedBytes = baos.toByteArray(),
3. storing it into a remote DB2 database (the technology used is not important, so I am using plain JDBC for now),
4. reading it back from the DB, and
5. converting it back to ensure that the re-creation of the image works (I am able to open the new re-created image in any image viewer).
Issues
The issues occur at step 5:
1. I read the image using a select query into a ResultSet.
2. I use rs.getBlob("ColumnName") to get the blob value.
3. I fetch the byte array from the blob value using byte[] decodedArray = myBlob.getBytes(1, (int) myBlob.length()).
4. I create the image from the obtained byte array.
At step 3 the byte array decodedArray obtained from the blob differs from the byte array imageEncodedBytes that I get when I read the image.
As a consequence, the following code to create the image from the byte array decodedArray fails.
ByteArrayInputStream bais = new ByteArrayInputStream(decodedArray);
// Writing the bytes back out as an image
BufferedImage imag = ImageIO.read(bais); // Line of failure: no registered provider able to read bais
ImageIO.write(imag, "jpg", new File(dirName, "snap.jpg"));
References and other data for issue investigation
I have referred to the following links for verification:
1. Inserting image in DB2
2. This link here offers insight, but I was not able to determine how to register the ImageReader.
3. When inserting the image into DB2 I am using the following query:
Statement st = conn.createStatement();
st.executeUpdate("INSERT INTO PHOTO (ID,PHOTO_NM,PHOTO_IM, THMBNL_IM) " + "VALUES (1,'blob("+bl+")',blob('"+bl+"')")
As an alternative to fetching the blob value from the result set, I have also used binaryStream = rs.getBinaryStream("PHOTO_IM") to get the binary stream and then obtained the byte array from it. Even in this case, decodedArray is different from imageEncodedBytes.
Please assist; I may be missing something extremely trivial here, but I am not able to figure out what. Any help/pointers will be greatly appreciated. Thanks in advance.
The resolution is sort of a workaround that I worked out.
I used JdbcTemplate to resolve the issue. The LobHandler object used with the JDBC template provides an easy way to manage blobs.
Steps to resolve using the Spring LobHandler:
1) Create a DataSource
2) Configure the JdbcTemplate to use the data source
3) Use the LobHandler code to execute the insert query
Code below:
LobHandler lobHandler = new DefaultLobHandler(); // from org.springframework.jdbc.support.lob
jdbcTemplate = new JdbcTemplate(dataSource);
jdbcTemplate.execute("INSERT INTO PHOTO (PHOTO_IM) VALUES (?)",
    new AbstractLobCreatingPreparedStatementCallback(lobHandler) {
        protected void setValues(PreparedStatement ps, LobCreator lobCreator)
                throws SQLException {
            // Stream the image bytes into the BLOB column via the LobCreator
            lobCreator.setBlobAsBinaryStream(ps, 1, bean.getImageOrig(), bean.getImageLength());
        }
    }
);
References
1) Stack Overflow link - Spring JDBC Template Insert Blob
2) Stack Overflow link - Close InputStream
3) Spring Doc for LobHandler
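Since the original question used plain JDBC, it may be worth adding (my note, not part of the answer above) that the same fix works without Spring: the root problem is building the INSERT by string concatenation, and a PreparedStatement with setBinaryStream avoids it. A minimal sketch, assuming the PHOTO table with a PHOTO_IM BLOB column from the question:
import java.io.ByteArrayInputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// imageEncodedBytes is the byte[] produced when the image was first read
static void insertPhoto(Connection conn, byte[] imageEncodedBytes) throws SQLException {
    try (PreparedStatement ps = conn.prepareStatement(
            "INSERT INTO PHOTO (PHOTO_IM) VALUES (?)")) {
        // Let the driver stream the bytes into the BLOB column untouched
        ps.setBinaryStream(1, new ByteArrayInputStream(imageEncodedBytes),
                imageEncodedBytes.length);
        ps.executeUpdate();
    }
}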

writeDelimitedTo/parseDelimitedFrom seem to be losing data

I am trying to use protocol buffer to record a little market data. Each time I get a quote notification from the market, I take this quote and convert it into a protocol buffers object. Then I call "writeDelimitedTo"
Example of my recorder:
try {
writeLock.lock();
LimitOrder serializableQuote = ...
LimitOrderTransport gpbQuoteRaw = serializableQuote.serialize();
LimitOrderTransport gpbQuote = LimitOrderTransport.newBuilder(gpbQuoteRaw).build();
gpbQuote.writeDelimitedTo(fileStream);
csvWriter1.println(gpbQuote.getIdNumber() + DELIMITER+ gpbQuote.getSymbol() + ...);
} finally {
writeLock.unlock();
}
The reason for the locking is that quotes coming from different markets are handled by different threads, so I was trying to simplify and "serialize" the logging to the file.
Code that Reads the resulting file:
FileInputStream stream = new FileInputStream(pathToFile);
PrintWriter writer = new PrintWriter("quoteStream6-compare.csv", "UTF-8");
while(LimitOrderTransport.newBuilder().mergeDelimitedFrom(stream)) {
LimitOrderTransport gpbQuote= LimitOrderTransport.parseDelimitedFrom(stream);
csvWriter2.println(gpbQuote.getIdNumber()+DELIMITER+ gpbQuote.getSymbol() ...);
}
When I run the recorder, I get a binary file that seems to grow in size. When I use my reader to read from the file I also appear to get a large number of quotes. They are all different and appear correct.
Here's the issue: Many of the quotes appear to be "missing" - Not present when my reader reads from the file.
I tried an experiment with csvWriter1 and csvWriter2. In my writer I write out a CSV file; then in my reader I write a second CSV file using my protobufs file as the source.
The theory is that they should match up. They don't: the original CSV file contains many more quotes than the CSV that I generate by reading my recorded protobufs data.
What gives? Am I not using writeDelimitedTo/parseDelimitedFrom correctly?
Thanks!
Your problem is here:
while(LimitOrderTransport.newBuilder().mergeDelimitedFrom(stream)) {
LimitOrderTransport gpbQuote= LimitOrderTransport.parseDelimitedFrom(stream);
The first line constructs a new LimitOrderTransport.Builder and uses it to parse a message from the stream. Then that builder is discarded.
The second line parses a new message from the same stream, into a new builder.
So you are discarding every other message.
Do this instead:
while (true) {
    LimitOrderTransport gpbQuote = LimitOrderTransport.parseDelimitedFrom(stream);
    if (gpbQuote == null) break; // EOF
    csvWriter2.println(gpbQuote.getIdNumber() + DELIMITER + gpbQuote.getSymbol() + ...);
}
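An alternative (my sketch, not part of the answer) is to keep a single builder and reuse it, which is what mergeDelimitedFrom is meant for; it returns false at end of stream:
LimitOrderTransport.Builder builder = LimitOrderTransport.newBuilder();
while (builder.mergeDelimitedFrom(stream)) {
    LimitOrderTransport gpbQuote = builder.build();
    csvWriter2.println(gpbQuote.getIdNumber() + DELIMITER + gpbQuote.getSymbol() + ...);
    builder.clear(); // reset the builder before merging the next message
}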

Appending data to properties file, comments disappear & order of data changed [duplicate]

This question already has answers here:
A better class to update property files?
(8 answers)
Closed 5 years ago.
While appending data to a properties file, the existing comments disappear and the order of the data changes. Please suggest how to avoid this.
The data in the properties file (before appending) along with its comments is as follows:
# Setting the following parameters
# Set URL to test the scripts against
App.URL = https://www.gmail.com
# Enter username and password values for the above Test URL
App.Username = XXXX
App.Password = XXXX
I am adding more data to the above properties file as follows:
public void WritePropertiesFile(String key, String data) throws Exception
{
try
{
loadProperties();
configProperty.setProperty(key, data);
File file = new File("D:\\Helper.properties");
FileOutputStream fileOut = new FileOutputStream(file);
configProperty.store(fileOut, null);
fileOut.close();
}
catch (Exception e)
{
e.printStackTrace();
}
}
Calling the above function as:
help.WritePropertiesFile("appwrite1","write1");
help.WritePropertiesFile("appwrite2","write2");
help.WritePropertiesFile("appwrite3","write3");
The data is added successfully; however, the previously entered comments have disappeared and the order of the data has also changed. The properties file (after appending data) looks as follows:
#Tue Jul 02 11:04:29 IST 2013
App.Password=XXXX
App.URL=https\://www.gmail.com
appwrite3=write3
appwrite2=write2
appwrite1=write1
App.Username=XXXX
I want the data to be appended at the end, without changing the order and without removing the previously entered comments. Please let me know if it is possible to implement this requirement.
I recently ran into the same issue and found the following answer here on Stack Overflow: https://stackoverflow.com/a/565996/1990089. It recommends using the Apache Commons Configuration API to deal with properties files, which allows comments and whitespace to be preserved. I haven't tried this myself yet, however.
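As a rough illustration of that suggestion (my addition, untested, assuming Commons Configuration 1.x on the classpath and code running where ConfigurationException can be thrown), updating a value while keeping the file's layout might look like this:
import org.apache.commons.configuration.PropertiesConfiguration;

// Loads the file together with its layout (comments, blank lines, ordering)
PropertiesConfiguration config = new PropertiesConfiguration("D:\\Helper.properties");
config.setProperty("appwrite1", "write1"); // new keys are appended at the end
config.save(); // writes the file back, preserving the existing comments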
It is not straightforward to preserve the comments in a properties file. There are no methods on java.util.Properties to handle comments; they are simply ignored when reading the file. Only the key-value pairs are loaded by properties.load, so when you save the file back the comments are lost. Check the link below; there is one solution to achieve what you need, though not an elegant one:
http://www.dreamincode.net/forums/topic/53734-java-code-to-modify-properties-file-and-preserve-comments/
If you don't want to lose the existing content of the properties file, just read the file and replace the string within it:
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
String file = "D:\\path of your file\\abc.properties";
Path path = Paths.get(file);
Charset charset = StandardCharsets.UTF_8;
// Read the whole file, replace the target string, and write it back
String content = new String(Files.readAllBytes(path), charset);
content = content.replaceAll("name=anything", "name=anything1");
Files.write(path, content.getBytes(charset));
The above code will not delete any content from your file; it just replaces the matching part.

Failing for Larger Input Files Only: FileServiceFactory getBlobKey throws IllegalArgumentException

I have a Google App Engine app that converts XML files to CSV. It works fine for small XML inputs, but refuses to finalize the output files for larger input XML. The XML is read from, and the resulting CSV files are written to, many times before finalization, over a long-running (multi-day) task. My problem is different from FileServiceFactory getBlobKey throws IllegalArgumentException, since my code works fine both in production and development with small input files, so it's not that I'm neglecting to write to the file before closing/finalizing. However, it fails when I attempt to read from a larger XML file. The input XML file is ~150MB, and each of the resulting 5 CSV files is much smaller (perhaps 10MB). I persisted the file URLs for the new CSV files, and even tried to close them with some static code, but I just reproduce the same error, which is:
java.lang.IllegalArgumentException: creation_handle: String properties must be 500 characters or less. Instead, use com.google.appengine.api.datastore.Text, which can store strings of any length.
at com.google.appengine.api.datastore.DataTypeUtils.checkSupportedSingleValue(DataTypeUtils.java:242)
at com.google.appengine.api.datastore.DataTypeUtils.checkSupportedValue(DataTypeUtils.java:207)
at com.google.appengine.api.datastore.DataTypeUtils.checkSupportedValue(DataTypeUtils.java:173)
at com.google.appengine.api.datastore.Query$FilterPredicate.<init>(Query.java:900)
at com.google.appengine.api.datastore.Query$FilterOperator.of(Query.java:75)
at com.google.appengine.api.datastore.Query.addFilter(Query.java:351)
at com.google.appengine.api.files.FileServiceImpl.getBlobKey(FileServiceImpl.java:329)
But I know that it's not a String/Text data type issue, since I am already using similar length file service urls for the previous successful attempts with smaller files. It also wasn't an issue for the other stackoverflow post I linked above. I also tried putting one last meaningless write before finalizing, just in case it would help as it did for the other post, but it made no difference. So there's really no way for me to debug this... Here is my file closing code that is not working. It's pretty similar to the Google how-to example at http://developers.google.com/appengine/docs/java/blobstore/overview#Writing_Files_to_the_Blobstore .
log.info("closing out file 1");
try {
//locked set to true
FileWriteChannel fwc1 = fileService.openWriteChannel(csvFile1, true);
fwc1.closeFinally();
} catch (IOException ioe) {ioe.printStackTrace();}
// You can't get the blob key until the file is finalized
BlobKey blobKeyCSV1 = fileService.getBlobKey(csvFile1);
log.info("csv blob storage key is:" + blobKeyCSV1.getKeyString());
csvUrls[i-1] = blobKeyCSV1.getKeyString();
break;
At this point, I just want to finalize my new blob files, for which I have the URLs, but cannot. How can I get around this issue, and what may be the cause? Again, my code works for small files (~60 kB), but the input file of ~150MB fails. Thank you for any advice on what is causing this or how to get around it! Also, how long will my unfinalized files stick around before being deleted?
This issue was a bug in the Java MapReduce and Files API, which was recently fixed by Google. Read the announcement here: groups.google.com/forum/#!topic/google-appengine/NmjYYLuSizo
