I am attempting to write a Java application that will unzip an archive and store its contents in a database.
I would like to insert each file into the database after it has been extracted. Does anyone have a good example of a Java unzip procedure?
A little Google search would have helped you: there is a tutorial by Sun.
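For reference, a minimal sketch of that approach using java.util.zip; it collects each entry's bytes in memory so they can be handed to the database layer (archive.zip is a placeholder name):

    import java.io.ByteArrayOutputStream;
    import java.io.FileInputStream;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipInputStream;

    public class UnzipToBytes {
        public static void main(String[] args) throws Exception {
            try (ZipInputStream zis = new ZipInputStream(new FileInputStream("archive.zip"))) {
                ZipEntry entry;
                byte[] buffer = new byte[4096];
                while ((entry = zis.getNextEntry()) != null) {
                    if (entry.isDirectory()) continue;
                    // Buffer the entry in memory; out.toByteArray() is what you would insert
                    ByteArrayOutputStream out = new ByteArrayOutputStream();
                    int len;
                    while ((len = zis.read(buffer)) > 0) {
                        out.write(buffer, 0, len);
                    }
                    System.out.println(entry.getName() + ": " + out.size() + " bytes");
                }
            }
        }
    }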
If you want to store the extracted data in a MySQL database, you'll want to use a BLOB to do so. A tutorial can be found here.
Note: BLOBs should not grow bigger than 1M, because beyond that they'll be slower than a normal file system. Here is the full article.
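To tie the two together, here is a sketch of inserting one extracted file as a BLOB over JDBC. The connection URL, credentials, and the files(name VARCHAR, data BLOB) table are assumptions, not anything prescribed by the tutorial:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class BlobInsert {
        // Inserts one extracted file's bytes as a BLOB row
        static void insertFile(String name, byte[] data) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/mydb", "user", "password");
                 PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO files (name, data) VALUES (?, ?)")) {
                ps.setString(1, name);
                ps.setBytes(2, data);
                ps.executeUpdate();
            }
        }
    }

In a real application you would open one connection and reuse it for every entry rather than reconnecting per file.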
Related
Is it possible to upload a JAR as a file into a database? I need to upload JARs into MongoDB, and I don't know how to do that. I know about file upload with Spring Boot.
I know it is possible to upload a ZIP into a database, but I can't find any information about JAR/WAR files.
JAR and WAR files are nothing more than renamed ZIP files. If you want to see it for yourself, rename something.jar to something.zip and open it using an archive manager.
Since you said you know how to upload a ZIP, you should follow the same procedure. If the file is small (e.g. less than 4 MB), perhaps storing it as BSON binary data is the best approach. See Storing Large Objects and Files in MongoDB.
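Since a JAR is just bytes, one way is to read it in and insert it as a BSON binary field. A minimal sketch, assuming the MongoDB Java sync driver; the connection string, database, collection, and file names are placeholders:

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import org.bson.Document;
    import org.bson.types.Binary;

    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class JarUpload {
        public static void main(String[] args) throws Exception {
            // A JAR is just a ZIP, so we can read it as raw bytes with no special handling
            byte[] jarBytes = Files.readAllBytes(Paths.get("app.jar"));

            try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                MongoCollection<Document> jars = client.getDatabase("artifacts")
                                                       .getCollection("jars");
                // Store the bytes in a BSON binary field; fine for files
                // well under the document size limit
                jars.insertOne(new Document("name", "app.jar")
                        .append("data", new Binary(jarBytes)));
            }
        }
    }

For files larger than the document size limit you would switch to GridFS instead.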
If you mean saving a JAR file into a database, it depends on the database's support for BLOB data types.
And if you mean using Java-based stored procedures loaded from a JAR file, this is possible with Oracle and PostgreSQL. MongoDB only supports server-side JavaScript stored procedures.
I have a .dbb database dump file. I want to restore it and take a backup of it in a text file by developing an application in Java or C#. I don't know anything about the .dbb file extension; I searched on the internet but found no satisfactory information about it. Please guide me on how to deal with .dbb.
Thanks in advance.
I'm using the Hadoop 2.5 vanilla version. I need to store a large data set of images in HDFS and Hive, but I can't figure out how to do it.
Can anyone help with this?
Thank you in advance.
Storing files in HDFS is easy; see the put documentation:
Usage: hdfs dfs -put <localsrc> ... <dst>
You can write scripts to put the image files in place, or do the same from Java, as sketched below.
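A minimal sketch using the Hadoop FileSystem Java API as an alternative to shell scripts; the local and HDFS paths are placeholders, and the configuration is assumed to come from the core-site.xml on the classpath:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class PutImages {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration(); // picks up core-site.xml from the classpath
            try (FileSystem fs = FileSystem.get(conf)) {
                // Copy a local directory of images into HDFS, equivalent to hdfs dfs -put
                fs.copyFromLocalFile(new Path("/data/images"), new Path("/user/hadoop/images"));
            }
        }
    }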
There is another question that tells you how to do it with Hive: How to Store Binary Data in Hive?
I've seen some discussions online suggesting that storing the images in HDFS and keeping the metadata and a link to the file in HBase is a better solution than storing the images directly in HBase.
See the following links for reference:
http://apache-hbase.679495.n3.nabble.com/Storing-images-in-Hbase-td4036184.html
http://www.quora.com/Is-HBase-appropriate-for-indexed-blob-storage-in-HDFS
https://www.linkedin.com/groups/What-is-best-NoSQL-DB-3638279.S.5866843079608131586
I am trying to upload files to Azure Blob storage; I have referred to this code for the same, and I am able to upload files successfully. My problem is that with this code I have to upload the files one by one, and since I receive more than one file at a time, I have to iterate over the list and pass the files one at a time.
What I want is to upload all the files to Azure Blob storage in one go.
I tried searching on the internet but could not find a way. Please help.
This is not specific to Java, but you may want to check out the AzCopy tool, which might be useful to you: it supports uploading blobs in parallel. A Java-side workaround is sketched after the links.
http://blogs.msdn.com/b/windowsazurestorage/archive/2012/12/03/azcopy-uploading-downloading-files-for-windows-azure-blobs.aspx
http://blogs.msdn.com/b/windowsazurestorage/archive/2013/09/07/azcopy-transfer-data-with-re-startable-mode-and-sas-token.aspx
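If you want to stay in Java, a common workaround is to parallelize the per-file uploads yourself. A minimal sketch using an ExecutorService; the client calls follow the azure-storage-blob v12 API, which may differ from the code you referenced, and the connection string, container, and file names are placeholders:

    import com.azure.storage.blob.BlobContainerClient;
    import com.azure.storage.blob.BlobServiceClientBuilder;

    import java.io.File;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class ParallelBlobUpload {
        public static void main(String[] args) throws InterruptedException {
            BlobContainerClient container = new BlobServiceClientBuilder()
                    .connectionString("<your-connection-string>")
                    .buildClient()
                    .getBlobContainerClient("images");

            List<File> files = List.of(new File("a.png"), new File("b.png"));

            ExecutorService pool = Executors.newFixedThreadPool(4);
            for (File f : files) {
                // Each file is still one blob, but the uploads now run concurrently
                pool.submit(() -> container.getBlobClient(f.getName())
                                           .uploadFromFile(f.getAbsolutePath()));
            }
            pool.shutdown();
            pool.awaitTermination(10, TimeUnit.MINUTES);
        }
    }

As far as I know, the SDK has no single call that uploads a whole list of files, so "in one go" in practice means issuing the uploads concurrently rather than sequentially.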
I'm trying to use crawler4j to crawl websites. I was able to follow the instructions on the crawler4j website. When it is done, it creates a folder with two different .lck files, one .jdb file, and one .info.0 file.
I tried to read the file using the code that I provided in this answer, but it keeps failing. I've used the same function to read text files before, so I know the code works.
I also found someone else who asked the same question a few months ago; they never got an answer.
Why can't I use my code to open and read these .lck files into memory?
Crawler4j uses BerkeleyDB to store crawl information. See here in the source.
From the command line you can use the DB utilities to access the data. This is already covered on SO here.
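For example, BerkeleyDB JE ships with a DbDump utility; assuming je.jar is available, something along these lines lists the databases in the folder crawler4j created (the path is a placeholder):

    java -jar je.jar DbDump -h /path/to/crawler4j/frontier -l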
If you want to access the data from your Java code, simply import the BerkeleyDB library (Maven instructions there) and follow the tutorial on how to open the DB.
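A minimal sketch of opening the environment read-only from Java; the folder path is a placeholder, and the database name follows what crawler4j's source uses for its pending-URL queue, so check the source if yours differs:

    import com.sleepycat.je.Database;
    import com.sleepycat.je.DatabaseConfig;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;

    import java.io.File;

    public class OpenCrawlDb {
        public static void main(String[] args) {
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setReadOnly(true); // we only want to inspect the crawl data

            // Point this at the folder crawler4j created (the one containing the .jdb file)
            Environment env = new Environment(new File("crawler4j/frontier"), envConfig);

            DatabaseConfig dbConfig = new DatabaseConfig();
            dbConfig.setReadOnly(true);

            Database db = env.openDatabase(null, "PendingURLsDB", dbConfig);
            System.out.println("Records: " + db.count());

            db.close();
            env.close();
        }
    }

The .lck files themselves are just lock files; the actual data lives in the .jdb log file, which is why reading them as plain text fails.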