Empty table in database using Hibernate - java

I am trying to write a mapping XML file for a POJO class and then create a Hibernate session, but the XML file doesn't seem to get picked up: the program executes without any error, yet the new table is empty and the data goes into the previously created table.
What I concluded from this is that my XML file isn't re-read when the Hibernate session is created; Hibernate keeps using the previous mapping, even though the file itself shows the new content when opened.
Is there any way of refreshing the XML file so that the new mapping is used by Hibernate?
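For reference, a minimal sketch of the kind of setup being described, assuming a hypothetical mapping file Pojo.hbm.xml and POJO class Pojo. The key point is that Hibernate parses the mapping only when the SessionFactory is built, so a SessionFactory built earlier will not reflect later edits to the file:

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class HibernateSetup {
    public static void main(String[] args) {
        // The mapping XML is parsed here, at build time; a cached
        // SessionFactory keeps using the mapping it was built with.
        SessionFactory factory = new Configuration()
                .configure()                    // reads hibernate.cfg.xml
                .addResource("Pojo.hbm.xml")    // hypothetical mapping file
                .buildSessionFactory();

        Session session = factory.openSession();
        session.beginTransaction();
        session.save(new Pojo());               // hypothetical POJO class
        session.getTransaction().commit();
        session.close();
        factory.close();
    }
}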

Related

How to replace whole SQL table data frequently?

I have a Spring application that runs a cron job. Every few minutes the job fetches new data from an external API. The data should be stored in a MySQL database in place of the old data (the old data should be overwritten, not updated). The application itself exposes a REST API so clients can read the data from the database, so there must never be a moment when a client sees an empty table or only part of the data because an update is in progress.
So far I've tried deleting all the old data and inserting the new data via the Spring Data deleteAll and saveAll methods, but there is a window in which a client gets only part of the data.
@Override
@Transactional
public List<Country> overrideAll(@NonNull Iterable<Country> countries) {
    removeAllAndFlush();
    // Map the incoming objects to entities and persist them in one batch
    List<CountryEntity> countriesToCreate = stream(countries.spliterator(), false)
            .map(CountryEntity::from)
            .collect(toList());
    List<CountryEntity> createdCountries = repository.saveAll(countriesToCreate);
    return createdCountries.stream()
            .map(CountryEntity::toCountry)
            .collect(toList());
}

private void removeAllAndFlush() {
    repository.deleteAll();
    repository.flush(); // push the deletes to the database before inserting
}
I also thought about having a temporary table that receives the new data and, once the data is complete, replacing the main table with the temporary table. Is that a good idea? Any other ideas?
It's a good idea. You can minimize the downtime by loading the data into another table and then switching tables quickly by renaming. This also improves perceived performance for users, because no records need to be locked the way they are with UPDATE/DELETE.
In MySQL, you can use RENAME TABLE as long as you don't have triggers on the table. It can rename multiple tables at once and it works atomically (i.e. like a transaction: if any error happens, no change is made). For example:
RENAME TABLE countries TO countries_old, countries_new TO countries;
DROP TABLE countries_old;
Refer to the RENAME TABLE documentation for more details: https://dev.mysql.com/doc/refman/5.7/en/rename-table.html
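A minimal sketch of the swap step in plain JDBC, assuming the fresh data has already been loaded into a staging table countries_new and that a javax.sql.DataSource is available (both are assumptions, not taken from the question's code):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

public class CountryTableSwapper {
    private final DataSource dataSource;

    public CountryTableSwapper(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // The rename is atomic, so readers see either the old data or the new
    // data, never an empty table or a partial load.
    public void swapInFreshData() throws SQLException {
        try (Connection conn = dataSource.getConnection();
             Statement stmt = conn.createStatement()) {
            stmt.execute("RENAME TABLE countries TO countries_old, countries_new TO countries");
            stmt.execute("DROP TABLE countries_old");
        }
    }
}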

How can I list database tables in a set of databases using the new DBCPConnectionPoolLookup in NiFi?

As of NiFi 1.7.1, the new DBCPConnectionPoolLookup enables dynamic selection of database connections: you set a database.name attribute on a FlowFile, and when a consuming processor accesses a configured DBCPConnectionPoolLookup controller service, the value of that attribute is used to pick a connection from the lookup's configured properties, which map possible values to DBCPConnectionPool controller services.
I'd like to list the tables in each database that I've configured in the lookup, but the ListDatabaseTables processor does not accept incoming FlowFiles. This seems to mean that it's not usable for listing tables in a dynamic set of databases.
What is the best way to accomplish this?
ListDatabaseTables uses the JDBC API to get table info from the metadata of an established JDBC connection. This hides the underlying details of how tables are actually retrieved from a particular database.
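For context, a minimal sketch of what that metadata lookup looks like in plain JDBC; the connection URL and credentials are placeholders:

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;

public class TableLister {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:mysql://localhost:3306/mydb"; // hypothetical connection URL
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            DatabaseMetaData meta = conn.getMetaData();
            // null catalog/schema patterns mean "all"; restrict to plain tables
            try (ResultSet rs = meta.getTables(null, null, "%", new String[] {"TABLE"})) {
                while (rs.next()) {
                    System.out.println(rs.getString("TABLE_NAME"));
                }
            }
        }
    }
}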
If all your databases are of the same ilk, and you have a list of them, you could generate one flow file per database, filling in the database.name attribute, then use ExecuteSQL with the DBCPConnectionPoolLookup to run the appropriate statement for listing tables in that database, such as SHOW TABLES. You can parse the results with any of the record-aware processors such as QueryRecord, UpdateRecord, ConvertRecord, etc., and if you need one table per flow file you can use SplitRecord. If the output is JSON, CSV, or XML, you can use EvaluateJsonPath, ExtractText, or EvaluateXPath respectively to get the table name into an attribute, and continue on from there.
I wrote up NIFI-5519 to cover the proposal for ListDatabaseTables to optionally accept incoming connections; in the meantime you'd need one ListDatabaseTables instance to correspond to each of your DBCPConnectionPool instances.

Save xml column data in physical path using java

I have a SQL Server database table with an XML column named "Bodytext" that stores XML data.
Now I need to get this "Bodytext" column data and save it to a physical path on the file system as an XML file (e.g. test.xml).
Any suggestion how to implement this using Java?
The column type would map to a BLOB or CLOB. Retrieve the column data and write the bytes to your file, naming it abc.xml, for example.
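A minimal sketch of that approach, assuming a hypothetical table named documents holding the Bodytext column; with the Microsoft JDBC driver an XML column can be read as a string:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class XmlColumnExporter {
    public static void main(String[] args) throws SQLException, IOException {
        // Hypothetical connection details and table name; adjust for your schema
        String url = "jdbc:sqlserver://localhost:1433;databaseName=mydb";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT Bodytext FROM documents")) {
            if (rs.next()) {
                String xml = rs.getString("Bodytext"); // XML column read as a string
                Files.write(Paths.get("test.xml"), xml.getBytes(StandardCharsets.UTF_8));
            }
        }
    }
}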

Create single xml file from multiple xmls

My requirement is as follows:
I would like to use JDBC to connect to Oracle, which has XML as a table column, and retrieve the records. I want all of the retrieved XML records to go into one file.
I will then parse the extracted XMLs using the JAXB API and delete some of the tags from every XML I read. Then I would like to generate one single output file consisting of all the edited XMLs.
After retrieving the data from the table, my output should look like this:
Extracted_records.txt
<Record><tag1></tag1><tag2></tag2><tag3></tag3></Record>
<Record><tag1></tag1><tag2></tag2><tag3></tag3></Record>
<Record><tag1></tag1><tag2></tag2><tag3></tag3></Record>
<Record><tag1></tag1><tag2></tag2><tag3></tag3></Record>
After parsing and deleting the tags, my output file should look like this:
Outputfile.txt
<Record><tag2></tag2><tag3></tag3></Record>
<Record><tag1></tag1><tag2></tag2></Record>
<Record><tag1></tag1></Record>
<Record><tag1></tag1><tag3></tag3></Record>
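For the extraction step, a minimal sketch assuming a hypothetical table mytable with an XMLTYPE column payload; getStringVal() serializes the XMLTYPE to text on the database side:

import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class XmlRecordExtractor {
    public static void main(String[] args) throws SQLException, IOException {
        String url = "jdbc:oracle:thin:@localhost:1521:orcl"; // hypothetical connection string
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT t.payload.getStringVal() FROM mytable t");
             BufferedWriter out = Files.newBufferedWriter(Paths.get("Extracted_records.txt"))) {
            while (rs.next()) {
                out.write(rs.getString(1)); // one <Record> document per row
                out.newLine();
            }
        }
    }
}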

Orient DB - Document and Cluster Mapping

I am using the OrientDB document model. My code to save a document:
private ODocument saveDocument(ODocument document) {
    ODatabaseRecordThreadLocal.INSTANCE.set(database);
    return document.save();
}
We create classes for some types, while other document classes are created at runtime and are hence schemaless.
The save code works fine when the ODocument is of a class that has been defined in the schema. For example, we have a Status class:
schema.createClass("Status");
So if I do
document = new ODocument("Status");
save(document)
then the above code works fine.
But if I do
document = new ODocument("RawData");
save(document)
then I get an OSchemaException:
Record saved into cluster collectionfile should be saved with class CollectionFile but saved with class RawData
where CollectionFile is some other schema class that I have in my database. My question is: why is OrientDB trying to save the RawData document into some other class's cluster?
P.S.: This code was working fine a day earlier, when I had a single DB in my application. Then I changed to a multi-DB approach, where I have two DB instances in the application.
Thanks for the help.
In the case of multiple DBs, you should set the current DB you want to use with:
ODatabaseRecordThreadLocal.INSTANCE.set( database2 );
Look at: http://www.orientechnologies.com/docs/last/Java-Multi-Threading.html
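For illustration, a minimal sketch of a two-database setup on one thread, with placeholder paths, default credentials, and a hypothetical field name:

import com.orientechnologies.orient.core.db.ODatabaseRecordThreadLocal;
import com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx;
import com.orientechnologies.orient.core.record.impl.ODocument;

public class MultiDbExample {
    public static void main(String[] args) {
        ODatabaseDocumentTx db1 = new ODatabaseDocumentTx("plocal:/data/db1").open("admin", "admin");
        ODatabaseDocumentTx db2 = new ODatabaseDocumentTx("plocal:/data/db2").open("admin", "admin");

        // Bind db2 to the current thread before saving, so document.save()
        // resolves the class and cluster against the intended database.
        ODatabaseRecordThreadLocal.INSTANCE.set(db2);
        ODocument document = new ODocument("RawData");
        document.field("source", "sensor-1"); // hypothetical field
        document.save();

        db1.close();
        db2.close();
    }
}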
