Merge multiple SQLite files on-device in Android using Java

In my Android application I've created multiple .db files, each containing multiple tables.
Is there a way to copy all tables into one central file? Is this possible without having to write a loop over every individual row, table, and file? In other words, is there a simple way of doing this?
Since the data is stored on-device, using any server-side mechanism is not an option.

You can use the ATTACH DATABASE statement in SQLite. Here is the documentation.
Once you have attached the source database, a single statement copies a whole table: INSERT INTO DestinationTable SELECT * FROM attachedDatabase.SourceTable
fun migration(context: Context) {
    // DatabaseHelper2 is the SQLiteOpenHelper of the destination database
    val destinationDb = DatabaseHelper2(context)
    val sourceDbPath = context.getDatabasePath("database1.sqlite")
    val writableDb = destinationDb.writableDatabase
    // Attach the source file under the alias "attached"
    writableDb.execSQL("ATTACH DATABASE '${sourceDbPath.absolutePath}' AS attached")
    // Copy every row; repeat for each table, then detach
    writableDb.execSQL("INSERT INTO DestinationTable SELECT * FROM attached.SourceTable")
    writableDb.execSQL("DETACH DATABASE attached")
}
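Since the question asks for Java, here is the same idea as a plain Java sketch. DatabaseHelper2 stands for the destination database's SQLiteOpenHelper, as in the snippet above, and the table names are illustrative:

import android.content.Context;
import android.database.sqlite.SQLiteDatabase;
import java.io.File;

public static void migrate(Context context) {
    File sourceDbPath = context.getDatabasePath("database1.sqlite");
    SQLiteDatabase db = new DatabaseHelper2(context).getWritableDatabase();
    db.execSQL("ATTACH DATABASE '" + sourceDbPath.getAbsolutePath() + "' AS attached");
    // One INSERT ... SELECT per table; source and destination tables
    // must have compatible column layouts
    db.execSQL("INSERT INTO DestinationTable SELECT * FROM attached.SourceTable");
    db.execSQL("DETACH DATABASE attached");
}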

Related

Google Cloud DLP Api InspectResult

Good day!
I'm using the Cloud DLP API to inspect BigQuery views by converting chunks of the data into a ContentItem and passing it to the inspect request. However, I'm having trouble converting the findings and saving them to a BigQuery table. Previously I used an Airflow DLP operator for this, and it was done automatically by passing an output storage config in an InspectConfig. That approach won't work anymore, though, because I'm calling the DLP API per chunk of data using Apache Beam in Java.
I saw that the Finding object has a writeTo() method, but I'm not sure how to use it, nor how to save the findings with the correct types into a BigQuery table. Can you help me with this? I'm currently stuck. Thank you!
what I want to do is something like this
for (Finding res : result.getFindingsList()) {
    TableRow bqRow = new TableRow();
    Object data = res.getLocation();
    bqRow.set("field", data);
    context.output(bqRow);
}
but this approach wouldn't save it to BigQuery with the correct types, especially for getLocation(), since it returns something like a protobuf message type.
I was trying to see whether I can use the writeTo() method, but I'm not sure how. Thank you in advance for the help!
for (Finding res : result.getFindingsList()) {
    res.writeTo(...)
    ...
    context.output(...);
}
If you use HybridInspect, the findings are stored to BigQuery for you.
https://cloud.google.com/dlp/docs/how-to-hybrid-jobs
If you do it yourself, you will need to convert the findings to a native BigQuery format such as JSON.
Load protobuf data to bigquery
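For example, one way to flatten each Finding is to pull out the scalar fields and serialize the nested Location message to a JSON string using JsonFormat from protobuf-java-util. A minimal sketch; the column names (quote, info_type, likelihood, location) are illustrative and must match your table's schema:

import com.google.api.services.bigquery.model.TableRow;
import com.google.privacy.dlp.v2.Finding;
import com.google.protobuf.InvalidProtocolBufferException;
import com.google.protobuf.util.JsonFormat;

for (Finding res : result.getFindingsList()) {
    TableRow bqRow = new TableRow();
    bqRow.set("quote", res.getQuote());
    bqRow.set("info_type", res.getInfoType().getName());
    bqRow.set("likelihood", res.getLikelihood().name());
    try {
        // getLocation() returns a protobuf message; store it as a JSON string
        bqRow.set("location", JsonFormat.printer().print(res.getLocation()));
    } catch (InvalidProtocolBufferException e) {
        throw new RuntimeException(e);
    }
    context.output(bqRow);
}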

Search in one GWT ComboBox affects another ComboBox with the same store

I'm using com.extjs.gxt.ui.client.widget.form.ComboBox with GWT version 2.8.2 and GXT 2.3.1a.
I have two ComboBoxes that share the same store with many values. When I search in one ComboBox, I can see in the debugger that its lastQuery value equals the search string and that only the single matching value is left in the ComboBox store. But I see the same single value in the other ComboBox that uses the same store. So when I try to work with other ComboBoxes connected (by listener) to the second ComboBox, I get an error because I can't load the required values.
com.google.gwt.core.client.JavaScriptException: (TypeError) : Cannot read properties of null (reading 'getValue__Ljava_lang_Object_2')
at Unknown.TypeError: Cannot read properties of null(Unknown#-1)
Is there perhaps an option that allows the same store to be used by several ComboBoxes without them affecting each other, i.e. without the store being limited to the values matching the search string? Using two stores by creating two copies is not preferable. Thanks in advance.
UPD: combobox2.getValue() gives null when a search was performed earlier. How can I reload the store in this case?
UPD: I created two ComboBoxes with the same store:
private final ListStore<ComboBoxItemModel<String>> countryStore;

public ComboBox<ComboBoxItemModel<String>> createCountryCombobox() {
    ComboBox<ComboBoxItemModel<String>> countriesComboBox = new ComboBox<>();
    countriesComboBox.setFieldLabel("Country");
    countriesComboBox.setStore(countryStore);
    countriesComboBox.setAllowBlank(false);
    countriesComboBox.setTriggerAction(ComboBox.TriggerAction.ALL);
    countriesComboBox.setDisplayField(ComboBoxItemModel.COMBO_BOX_ITEM_NAME);
    countriesComboBox.setValue(countriesComboBox.getStore().getAt(0));
    countriesComboBox.setForceSelection(true);
    countriesComboBox.setEditable(true);
    return countriesComboBox;
}
The setEditable(true) option allows searching countries in the list, but as a result it removes values from the store. So I can't load values for the other fields connected to the second ComboBox without a manual selection in combobox2.
It seems that using two stores is quite a simple solution after all; the other solutions I tried looked like workarounds. I just create copies and it works fine:
// Give each ComboBox its own copy of the shared models
ListStore<ComboBoxItemModel<String>> store = new ListStore<>();
store.add(countryStore.getModels());
countriesComboBox.setStore(store);

How to pre-populate data into a table?

Recently I found an article about the Room DB, and its first tip was about pre-populating the database.
Is there currently an elegant way to pre-populate the data when the database is created?
I am using Dagger 2, so the actual creation of the database is done quite easily:
@Module
class DatabaseModule {

    @Provides
    @Singleton
    fun provideObjectBox(context: Context): BoxStore =
        MyObjectBox.builder()
            .androidContext(context)
            .build()
}
And here is the way I am doing it now, with SharedPreferences: I just check whether it is the first setup of the database and then populate it.
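A minimal sketch of that first-run check in Java (the preference key and the populateDatabase routine are illustrative):

SharedPreferences prefs = context.getSharedPreferences("app_prefs", Context.MODE_PRIVATE);
if (!prefs.getBoolean("db_populated", false)) {
    populateDatabase(boxStore); // hypothetical routine that puts the initial objects
    prefs.edit().putBoolean("db_populated", true).apply();
}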
Also, I guess this was not possible when the question was created, but a function has since been added to the builder, so you can simply call:
MyObjectBox.builder()
    .initialDbFile(file)
    .androidContext(context)
    .build()
This will use the given file, which must contain all the data in MDB format. I am using this feature for backing up user data, so I don't have to create the file on my own.
As far as I know there is no easy way to create this file other than creating objects and putting them into the BoxStore.
I am copying the file with the already existing data like this (not the prettiest way, though):
val dbFile = File(File(File(activity.getFilesDir(), "objectbox"), BoxStoreBuilder.DEFAULT_NAME), "data.mdb")
Just found that the main contributor answered the same: https://stackoverflow.com/a/51765399/8524651

Save a Spark RDD using mapPartitions with an iterator

I have some intermediate data that I need to store in HDFS and locally as well. I'm using Spark 1.6. In HDFS, as an intermediate form, I'm getting data in /output/testDummy/part-00000 and /output/testDummy/part-00001. I want to save these partitions locally using Java/Scala, either as /users/home/indexes/index.nt (by merging both locally) or as /users/home/indexes/index-0000.nt and /home/indexes/index-0001.nt separately.
Here is my code:
Note: testDummy is the same as test; output has two partitions. I want to store them separately or combined, but locally, in an index.nt file. I'd prefer to store them separately on the two data nodes. I'm using a cluster and submit the Spark job on YARN. I also added some comments showing how many times things run and what data I'm getting. How can I do this? Any help is appreciated.
val testDummy = outputFlatMapTuples.coalesce(Constants.INITIAL_PARTITIONS).saveAsTextFile(outputFilePathForHDFS + "/testDummy")
println("testDummy done") // 1 time print

def savesData(iterator: Iterator[(String)]): Iterator[(String)] = {
  println("Inside savesData") // now 4 times when coalesce(Constants.INITIAL_PARTITIONS)=2
  println("iter size " + iterator.size) // 2 735 2 735 values
  val filenamesWithExtension = outputPath + "/index.nt"
  println("filenamesWithExtension " + filenamesWithExtension.length) // 4 times
  var list = List[(String)]()
  val fileWritter = new FileWriter(filenamesWithExtension, true)
  val bufferWritter = new BufferedWriter(fileWritter)
  while (iterator.hasNext) { // iterator.hasNext is false
    println("inside iterator") // 0 times
    val dat = iterator.next()
    println("datadata " + iterator.next())
    bufferWritter.write(dat + "\n")
    bufferWritter.flush()
    println("index files written")
    val dataElements = dat.split(" ")
    println("dataElements") // 0
    list = list.::(dataElements(0))
    list = list.::(dataElements(1))
    list = list.::(dataElements(2))
  }
  bufferWritter.close() // closing
  println("savesData method end") // 4 times when coal=2
  list.iterator
}

println("before saving data into local") // 1
val test = outputFlatMapTuples.coalesce(Constants.INITIAL_PARTITIONS).mapPartitions(savesData)
println("testRDD partitions " + test.getNumPartitions) // 2
println("testRDD size " + test.collect().length) // 0
println("after saving data into local") // 1
PS: I followed this and this, but neither is exactly what I'm looking for. I got it partially working, but nothing ends up in index.nt.
A couple of things:
Never call Iterator.size if you plan to use the data later. Iterators are TraversableOnce. The only way to compute the size of an Iterator is to traverse all its elements, and after that there is no more data to be read.
Don't use transformations like mapPartitions for side effects. If you want to perform some kind of IO, use actions like foreach / foreachPartition. Relying on a transformation is bad practice and doesn't guarantee that a given piece of code will be executed only once.
A local path inside an action or transformation is a local path on the particular worker. If you want to write directly on the client machine, you should fetch the data first with collect or toLocalIterator, as in the sketch below. It could be better, though, to write to distributed storage and fetch the data later.
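A minimal sketch of the "fetch first" option using Spark's Java API (the question's code is Scala; the output path comes from the question, and saveLocally is an illustrative name):

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.util.Iterator;
import org.apache.spark.api.java.JavaRDD;

// Runs on the driver: toLocalIterator streams one partition at a time,
// so the whole RDD never has to fit into driver memory at once.
static void saveLocally(JavaRDD<String> rdd) throws IOException {
    try (BufferedWriter out = new BufferedWriter(new FileWriter("/users/home/indexes/index.nt"))) {
        Iterator<String> it = rdd.toLocalIterator();
        while (it.hasNext()) {
            out.write(it.next());
            out.newLine();
        }
    }
}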
Java 7 provides a means to watch directories.
https://docs.oracle.com/javase/tutorial/essential/io/notification.html
The idea is to create a watch service, register it with the directory of interest (specifying the events you care about, such as file creation or deletion), and watch; you will be notified of any such events and can then take whatever action you want. A sketch follows below.
You will have to depend on the Java HDFS API heavily wherever applicable.
Run the program in the background, since it waits for events forever. (You can write logic to quit after you have done whatever you want.)
On the other hand, shell scripting will also help.
Be aware of the coherency model of the HDFS file system when reading files.
Hope this helps with some ideas.
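A minimal sketch of that watch loop (the watched directory path is illustrative):

import java.nio.file.*;

public static void watchIndexes() throws Exception {
    WatchService watcher = FileSystems.getDefault().newWatchService();
    Path dir = Paths.get("/users/home/indexes");
    dir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE,
                 StandardWatchEventKinds.ENTRY_DELETE);
    while (true) {
        WatchKey key = watcher.take(); // blocks until an event arrives
        for (WatchEvent<?> event : key.pollEvents()) {
            System.out.println(event.kind() + ": " + event.context());
        }
        if (!key.reset()) break; // directory is no longer accessible
    }
}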

How to copy a schema in MySQL using Java

In my application I need to copy a schema, with its tables and stored procedures, from a base schema to a new schema.
I am looking for a way to implement this.
I looked into executing mysqldump from the command line, but it is not a good solution because I have a client-side application, and this would require installing the server tools on the client side.
The other option is my own implementation using SHOW queries.
The problem here is that I would need to implement it all from scratch, and the most problematic part is that I would need to arrange the order of the tables according to their foreign keys (because if there is a foreign key in a table, the table it points to needs to be created first).
I also thought of creating a stored procedure to do this, but stored procedures in MySQL can't access the disk.
Perhaps someone has an idea of how this can be implemented in another way?
You can try using Apache DdlUtils. It offers a way to export the DDL from a database to an XML file and re-import it.
The API usage page has examples of how to export a schema to an XML file, read it back from the XML file, and apply it to a new database. I have reproduced those functions below, along with a small snippet showing how to use them to accomplish what you are asking for. You can use this as a starting point and optimize it further.
DataSource sourceDb; // connected to the base schema
DataSource targetDb; // connected to the new schema

writeDatabaseToXML(readDatabase(sourceDb), "database-dump.xml");
changeDatabase(targetDb, readDatabaseFromXML("database-dump.xml"));

public Database readDatabase(DataSource dataSource)
{
    Platform platform = PlatformFactory.createNewPlatformInstance(dataSource);
    return platform.readModelFromDatabase("model");
}

public void writeDatabaseToXML(Database db, String fileName)
{
    new DatabaseIO().write(db, fileName);
}

public Database readDatabaseFromXML(String fileName)
{
    return new DatabaseIO().read(fileName);
}

public void changeDatabase(DataSource dataSource, Database targetModel)
{
    Platform platform = PlatformFactory.createNewPlatformInstance(dataSource);
    platform.createTables(targetModel, true, false);
}
You can use information_schema to fetch the foreign key information and build a dependency tree. Here is an example.
But I think you are trying to solve something that has been solved many times before. I'm not familiar with Java, but there are ORM tools (for Python, at least) that can inspect your current database and create a matching model in Java (or Python). You can then deploy that model into another database.
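For the information_schema route, a minimal JDBC sketch that collects the foreign-key edges you would feed into a topological sort (the connection details and schema name are illustrative):

import java.sql.*;
import java.util.*;

// Map each table to the set of tables it references;
// referenced tables must be created first.
Map<String, Set<String>> dependsOn = new HashMap<>();
try (Connection con = DriverManager.getConnection(
         "jdbc:mysql://localhost/", "user", "password");
     PreparedStatement ps = con.prepareStatement(
         "SELECT TABLE_NAME, REFERENCED_TABLE_NAME " +
         "FROM information_schema.KEY_COLUMN_USAGE " +
         "WHERE TABLE_SCHEMA = ? AND REFERENCED_TABLE_NAME IS NOT NULL")) {
    ps.setString(1, "base_schema");
    try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
            dependsOn.computeIfAbsent(rs.getString(1), k -> new HashSet<>())
                     .add(rs.getString(2));
        }
    }
}

Topologically sorting this map gives a safe creation order for the CREATE TABLE statements.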
