I am using MariaDB to store dynamic columns. In plain SQL this is covered in the MariaDB documentation: https://mariadb.com/kb/en/library/dynamic-columns/.
But I couldn't find a Spring Boot JPA implementation, so I tried a JPA native query. I am storing JSON in this dynamic column, which has the BLOB data type in MariaDB. It is very difficult because I couldn't find a way to store the JSON when it nests other objects or arrays. Is there any way to accomplish this task?
JSON is just a string. It could be stored in a TEXT or BLOB column.
The issue is that it is a structured string and it is tempting to reach into it to inspect/modify fields. Don't. It won't be efficient.
Copy the values you need for WHERE or ORDER BY (etc.) out of the JSON into real columns so that MySQL/MariaDB can access them efficiently, as in the sketch below. Sure, keep copies inside the JSON text/blob, if you like.
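A minimal JPA entity sketch of that layout (the table and column names are invented for illustration): the full JSON stays in one opaque TEXT/BLOB column, while the fields used for filtering are copied into ordinary columns.

```java
import javax.persistence.*;

@Entity
@Table(name = "product") // hypothetical table
public class Product {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    // Full JSON document, stored as an opaque string (TEXT/BLOB in MariaDB).
    // Nested objects and arrays are no problem: the DB never looks inside.
    @Lob
    @Column(name = "attrs_json")
    private String attrsJson;

    // Values copied out of the JSON so WHERE/ORDER BY hit real (indexable) columns.
    @Column(name = "color")
    private String color;

    @Column(name = "price")
    private Integer price;

    // getters/setters omitted
}
```

Queries then filter on color/price directly, and the application parses attrsJson (e.g. with Jackson) only after the rows have been selected.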
Please describe your use of JSON; there may be more tips on "proper" usage inside a database.
That being said, MariaDB 10.2 has picked up much of Oracle MySQL's JSON functionality. But there is no JSON datatype, and there are a few function differences. (A sketch of calling those functions from Spring Data follows below.)
And none of that says anything about how far behind the third-party software (Spring Boot, etc.) is.
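If you do want to reach into the JSON from Spring Data, a native query is the pragmatic route. A hedged sketch, assuming MariaDB 10.2+ (for JSON_VALUE) and the hypothetical Product entity above:

```java
import java.util.List;

import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.CrudRepository;
import org.springframework.data.repository.query.Param;

public interface ProductRepository extends CrudRepository<Product, Long> {

    // JSON_VALUE is evaluated by MariaDB itself; JPA just passes the SQL through.
    @Query(value = "SELECT * FROM product "
                 + "WHERE JSON_VALUE(attrs_json, '$.color') = :color",
           nativeQuery = true)
    List<Product> findByJsonColor(@Param("color") String color);
}
```

Per the advice above, treat this as an escape hatch rather than the main access path; it cannot use an index on the JSON contents.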
I am using an MS SQL database server for data, and I have the Hibernate framework for DB mapping. I need to export specific data (select ... from ...) to XML but have no idea how. I tried searching online but found nothing...
I have two ideas, but you may advise a more experienced approach to this problem.
1. Generate the XML at the DB layer with FOR XML inside the select. Here I don't know what the result of the select would be in Hibernate... a String? I don't know.
2. Retrieve a list from the DB and then use Java to convert that list into XML, for example as shown here: https://www.javatpoint.com/jaxb-marshalling-example
What do you suggest? Thanks
There are many parameters that could drive the decision about the best approach (e.g. Is it important to be database independent? Is there an XML schema provided as a basis for the output? How is the output going to be used/accessed? etc.).
If the output you're going to generate is something like a plain dump of the data and it would be used in the future to import data back into the database (or similar usages), then maybe just generating the output in the database layer would be enough. But most of the time this is not the case. I highly recommend generating the output in the application layer so that you have better control over the way it is produced and customized later. You can fetch the data using a normal Hibernate query and use (as you mentioned) JAX-B to serialize it into XML, along these lines:
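A minimal JAX-B marshalling sketch (the DTO class and its fields are made up for illustration):

```java
import java.io.StringWriter;

import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement
class EmployeeDto {
    public Long id;
    public String name;
}

public class JaxbXmlExport {
    public static String toXml(EmployeeDto dto) throws Exception {
        // Fetch the entity via a normal Hibernate query, copy it into the DTO,
        // then let JAX-B serialize the DTO.
        JAXBContext ctx = JAXBContext.newInstance(EmployeeDto.class);
        Marshaller marshaller = ctx.createMarshaller();
        marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
        StringWriter out = new StringWriter();
        marshaller.marshal(dto, out);
        return out.toString();
    }
}
```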
I wouldn't generate the XML at the database level. Since you're already using Hibernate, you can turn your entity object into a data transfer object (DTO), which is basically the same object, but this time intended to be marshalled and unmarshalled by a library. Rather than JAXB, you may want to look at Jackson, which is a bit easier to use (Google for "xml jackson"), and if somebody ever decides that the output should be in JSON or YAML instead, it's very easy to change.
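For comparison, the same idea sketched with Jackson (this assumes the jackson-dataformat-xml module is on the classpath; any DTO will do):

```java
import com.fasterxml.jackson.dataformat.xml.XmlMapper;

public class JacksonXmlExport {
    public static String toXml(Object dto) throws Exception {
        // XmlMapper is a drop-in ObjectMapper subclass that writes XML.
        // Swapping it for ObjectMapper (JSON) later is a one-line change.
        XmlMapper mapper = new XmlMapper();
        return mapper.writerWithDefaultPrettyPrinter().writeValueAsString(dto);
    }
}
```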
I've been playing with MariaDB and really enjoy its NoSQL aspect: we can put JSON into a column and run a combined query over the relational and the JSON parts of a record. More info here: https://mariadb.com/kb/en/mariadb/dynamic-columns/
Now I'm trying to hook MariaDB into my code.
Following this page https://springframework.guru/configuring-spring-boot-for-mariadb/ I've been able to connect to MariaDB from Java code and add and edit data like in a normal relational database.
Though I'm not sure how to use the NoSQL aspect of it from Java.
Namely, adding a BLOB column that will contain JSON: what Java data type represents it, and how does querying work?
I've been doing a lot of research on the web and couldn't find anything. Any assistance appreciated.
You will have to set the Query parameter to = ""; try it and post the results.
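To make the dynamic-column part more concrete: on the Java side the column maps to byte[] (or to String once rendered as JSON), and the querying goes through MariaDB's COLUMN_* functions in plain SQL. A hedged JDBC sketch, assuming a hypothetical table item(id INT, attrs BLOB):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class DynamicColumnsDemo {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mariadb://localhost:3306/test", "user", "password")) {

            // COLUMN_CREATE builds the dynamic-column blob on the server side.
            try (PreparedStatement insert = conn.prepareStatement(
                    "INSERT INTO item (id, attrs) VALUES (?, COLUMN_CREATE('color', ?))")) {
                insert.setInt(1, 1);
                insert.setString(2, "red");
                insert.executeUpdate();
            }

            // COLUMN_JSON renders the whole blob as JSON text;
            // COLUMN_GET extracts one value with an explicit type.
            try (PreparedStatement query = conn.prepareStatement(
                    "SELECT COLUMN_JSON(attrs), COLUMN_GET(attrs, 'color' AS CHAR) FROM item");
                 ResultSet rs = query.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + " / " + rs.getString(2));
                }
            }
        }
    }
}
```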
I'm using MongoDB and PostgreSQL in my application. The reason for using MongoDB is that any number of new fields might get inserted, and we store the data for those in MongoDB.
We are storing our fixed field values in PostgreSQL and custom field values in MongoDB.
E.g.
**Employee Table (RDBMS):**
| id | Name  | Salary |
|----|-------|--------|
| 1  | Krish | 40000  |
**Employee Collection (MongoDB):**
{
  <some autogenerated id of MongoDB>,
  instanceId: 1,        // the id from the SQL table, manually assigned
  employeeCode: "A001"
}
We get the records from SQL and, using their ids, fetch the related records from MongoDB. Then we map the results to get the values of the new fields and send them to the UI.
Now I'm searching for an optimized solution to get the MongoDB results into the PostgreSQL POJO/model, so that I don't have to fetch the data from MongoDB manually by passing the SQL ids and then map them again.
Is there any way to connect MongoDB with PostgreSQL through columns (here the id of the RDBMS and the instanceId of MongoDB) so that one fetch gets the related Mongo result too? Any kind of return type is acceptable, but I need all of it in one call.
I'm using Hibernate and Spring in my application.
Using Spring Data might be the best solution for your use case, since it supports both:
JPA
MongoDB
You can still get all the data in one request, but that doesn't mean you have to use a single DB call. You can have one service call that spans two database calls. Because the PostgreSQL row is probably the primary entity, I advise you to share the PostgreSQL primary key with MongoDB too.
There's no need for separate IDs. That way you can fetch the SQL row and the Mongo document by the same ID. Sharing the same ID also lets you run those two lookups concurrently and merge the results before returning from the service call, so the service method's duration is not the sum of the two repository calls but the max of the two.
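A sketch of that shape, with made-up entity, document and repository names — one Spring Data JPA repository, one Spring Data MongoDB repository, and a service that fires both lookups concurrently:

```java
import java.util.concurrent.CompletableFuture;

import javax.persistence.Entity;
import javax.persistence.Id;

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.mongodb.core.mapping.Document;
import org.springframework.data.mongodb.repository.MongoRepository;
import org.springframework.stereotype.Service;

@Entity
class Employee {                      // fixed fields, PostgreSQL
    @Id Long id;
    String name;
    Integer salary;
}

@Document(collection = "employee_extras")
class EmployeeExtras {                // custom fields, MongoDB
    @org.springframework.data.annotation.Id
    Long id;                          // same value as the PostgreSQL primary key
    String employeeCode;
}

interface EmployeeRepository extends JpaRepository<Employee, Long> {}
interface EmployeeExtrasRepository extends MongoRepository<EmployeeExtras, Long> {}

class EmployeeView {                  // merged result sent to the UI
    final Employee fixed;
    final EmployeeExtras custom;
    EmployeeView(Employee fixed, EmployeeExtras custom) {
        this.fixed = fixed;
        this.custom = custom;
    }
}

@Service
class EmployeeService {
    private final EmployeeRepository employees;
    private final EmployeeExtrasRepository extras;

    EmployeeService(EmployeeRepository employees, EmployeeExtrasRepository extras) {
        this.employees = employees;
        this.extras = extras;
    }

    // One service call, two concurrent DB calls, one merged result.
    EmployeeView findById(Long id) {
        CompletableFuture<Employee> fixed =
                CompletableFuture.supplyAsync(() -> employees.findById(id).orElseThrow());
        CompletableFuture<EmployeeExtras> custom =
                CompletableFuture.supplyAsync(() -> extras.findById(id).orElseThrow());
        // Latency is roughly max(postgres, mongo), not the sum.
        return new EmployeeView(fixed.join(), custom.join());
    }
}
```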
Astonishingly, yes, you potentially can. There's a foreign data wrapper named mongo_fdw that allows PostgreSQL to query MongoDB. I haven't used it and have no opinion as to its performance, utility or quality.
I would be very surprised if you could effectively use this via Hibernate, unless you can convince Hibernate that the FDW mapped "tables" are just views. You might have more luck with EclipseLink and their "NoSQL" support if you want to do it at the Java level.
Separately, this sounds like a monstrosity of a design. There are many sane ways to do what you want within a decent RDBMS, without going for a hybrid database platform. There's a time and a place for hybrid, but I really doubt your situation justifies the complexity.
Just use PostgreSQL's json/jsonb support for dynamic mappings, as sketched below. Or use traditional options like storing JSON as text fields, storing XML, or even EAV mapping. Don't build a Rube Goldberg machine.
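To make the jsonb alternative concrete, a rough sketch of the single-database version, reusing the hypothetical Employee entity from the sketch above with an extra custom_fields jsonb column (column and method names invented):

```java
import java.util.List;

import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.CrudRepository;
import org.springframework.data.repository.query.Param;

interface EmployeeJsonbRepository extends CrudRepository<Employee, Long> {

    // ->> extracts a jsonb field as text; one query, no second datastore.
    @Query(value = "SELECT * FROM employee "
                 + "WHERE custom_fields ->> 'employeeCode' = :code",
           nativeQuery = true)
    List<Employee> findByEmployeeCode(@Param("code") String code);
}
```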
I need to convert Java objects imported from the DB to XML so that I can use them with XStream in OptaPlanner. Is there any alternative to Hibernate for accessing the data in the DB? And how can I add more attributes for job scheduling?
optaplanner-core works on POJOs (JavaBeans). It's oblivious to the fact that in optaplanner-examples those POJOs are read/written to XML files by XStream (and it doesn't care). Similarly, you can use any other technology to store those POJOs:
JPA (for example Hibernate-ORM, OpenJPA, ...) to store them in a database
JDBC to store them in a database. Note: JDBC works with SQL statements, so you'll need to manually map SQL records to POJOs.
JAXB to store them in XML
XStream to store them in XML (as the examples do it; see the sketch after this list)
Infinispan, MongoDB, ... to store them in a big data cloud. Note: might require manual mapping too, unless you use hibernate-ogm
...
OptaPlanner doesn't care, so it doesn't restrict you :)
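To illustrate the XStream option from the list above (the approach the examples take), a minimal round-trip with a made-up planning POJO:

```java
import com.thoughtworks.xstream.XStream;

public class XStreamRoundTrip {

    // A plain POJO: OptaPlanner only ever sees objects like this,
    // never the storage format they came from.
    public static class JobSchedule {
        public String name;
        public int jobCount;
    }

    public static void main(String[] args) {
        XStream xstream = new XStream();
        xstream.alias("jobSchedule", JobSchedule.class);       // element name instead of FQCN
        xstream.allowTypes(new Class[] { JobSchedule.class }); // required by newer XStream security

        JobSchedule schedule = new JobSchedule();
        schedule.name = "demo";
        schedule.jobCount = 3;

        String xml = xstream.toXML(schedule);                  // POJO -> XML
        JobSchedule copy = (JobSchedule) xstream.fromXML(xml); // XML -> POJO
        System.out.println(xml + "\n" + copy.name);
    }
}
```

Adding more attributes for scheduling is then just a matter of adding fields to the POJO; the storage layer follows along.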
I am trying to create an application in Java which pulls records from the database and maps them to objects. It does that without knowing what the schema of the database looks like. All I want to do is fetch all rows from all tables and store them somewhere. There could be a thousand tables with thousands of records each. The application doesn't know the name of any table or attribute; it should map "on the fly". I looked at Hibernate, but it doesn't give me what I want for this app. I don't want to create hard-coded XML files and classes for mapping. Any ideas how I can accomplish this?
Thanks
Oracle has a bunch of data dictionary views for metadata.
ALL_TABLES and ALL_TAB_COLUMNS would be the first places to start. Then you'd build ad-hoc queries based on what you get out of there. Not sure whether you have to deal with all data types (dates, BLOBs, spatial, user-defined...).
Not sure what you mean by "store them somewhere". If you start thinking CSV or XML files, you'll need to escape various characters from VARCHAR2 columns.
If you are looking for some generic extract/unload routines, you should look at what is already available in the database or open-source/commercially.
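Outside the Oracle-specific dictionary views, plain JDBC metadata can do the same thing in a database-independent way. A rough sketch that discovers the tables at runtime and reads every row into a map, with no hard-coded schema (the connection details are placeholders):

```java
import java.sql.*;
import java.util.*;

public class GenericDump {

    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@localhost:1521:XE", "user", "password")) {

            DatabaseMetaData meta = conn.getMetaData();
            Map<String, List<Map<String, Object>>> dump = new LinkedHashMap<>();

            // Discover table names at runtime -- nothing is hard-coded.
            try (ResultSet tables = meta.getTables(null, null, "%", new String[] {"TABLE"})) {
                while (tables.next()) {
                    String table = tables.getString("TABLE_NAME");
                    dump.put(table, readAll(conn, table));
                }
            }
            System.out.println("Dumped tables: " + dump.keySet());
        }
    }

    // Each row becomes a column-name -> value map, driven by ResultSetMetaData.
    private static List<Map<String, Object>> readAll(Connection conn, String table)
            throws SQLException {
        List<Map<String, Object>> rows = new ArrayList<>();
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT * FROM " + table)) {
            ResultSetMetaData md = rs.getMetaData();
            while (rs.next()) {
                Map<String, Object> row = new LinkedHashMap<>();
                for (int i = 1; i <= md.getColumnCount(); i++) {
                    row.put(md.getColumnName(i), rs.getObject(i));
                }
                rows.add(row);
            }
        }
        return rows;
    }
}
```

For thousands of tables with thousands of rows each you'd stream rows out to the destination instead of collecting them in memory, but the metadata-driven mapping stays the same.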
MyBatis provides a pretty simple way to map data results to objects and back; maybe check that out?
http://code.google.com/p/mybatis/
Not to be flip, but for this task, you might want to check out Ruby on Rails and its ActiveRecord approach