We are converting our Java + MySQL application to Couchbase, using Spring Data and a Couchbase server.
I am confused about how the Java objects (entities/POJOs) should be saved to the Couchbase bucket.
I read that I can't create one bucket per entity. Should I put all the data into one bucket and add a _class property so that I can identify the data objects?
Is that the right way? Please share any links or suggestions.
Spring Data with Couchbase: this is the link I was using.
If you create objects through Spring Data Couchbase, they will automatically have this _class property. It is used by Spring Data to convert the JSON object from Couchbase into a POJO. Using a type field (or the _class field added automatically by Spring Data) is indeed good practice, as it allows you to filter easily when creating views or using N1QL. That is also how you keep objects of different types in the same bucket.
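Since everything lands in one bucket, the _class (or custom type) field is what lets you pull out documents of a single kind. Here is a minimal plain-Java sketch of that idea; the bucket is simulated as an in-memory list and all class names are hypothetical:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative sketch: two document types share one bucket, distinguished by
// the _class discriminator that Spring Data Couchbase writes automatically.
public class SingleBucketSketch {

    static List<Map<String, String>> filterByClass(List<Map<String, String>> bucket,
                                                   String className) {
        // Server-side, this is what a N1QL query such as
        //   SELECT * FROM `myBucket` WHERE _class = 'com.example.User'
        // would do for you.
        return bucket.stream()
                .filter(doc -> className.equals(doc.get("_class")))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Map<String, String>> bucket = List.of(
                Map.of("_class", "com.example.User", "name", "Alice"),
                Map.of("_class", "com.example.Order", "item", "Book"),
                Map.of("_class", "com.example.User", "name", "Bob"));
        System.out.println(filterByClass(bucket, "com.example.User").size()); // prints 2
    }
}
```

In a real deployment the filtering happens in a view or N1QL query rather than in application code, but the discriminator field plays the same role either way.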
Related
I have created a JPA @Query that selects from the primary source table, joins in multiple records from the inventory table, and also returns person 1 and person 2 in the JSON response object. What do I need to do on the Angular side to format the response and refer to the different data elements coming back on the form? For example: source.id, source.desc, and on the same page person.name, person.phone, and the inventory data. I can't find any good examples of how to achieve this correctly. Thank you in advance.
Using database entities on the frontend directly is bad practice. Instead, create DTOs (data transfer objects) to hold the data you need on the frontend, structured as you see fit for ease of handling. The mapping between entities and DTOs can be done (semi-)automatically, say, via MapStruct.
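A small sketch of the entity/DTO split; the field names here are illustrative, not taken from the question's actual tables:

```java
// Hypothetical entity/DTO pair demonstrating why the frontend never sees
// the entity directly.
public class DtoSketch {

    // What the persistence layer returns (simplified entity).
    record SourceEntity(long id, String desc, String internalNotes) {}

    // What the frontend actually needs: no internal fields, flat and easy to bind.
    record SourceDto(long id, String desc) {}

    // Hand-written mapper; MapStruct would generate equivalent code
    // from a mapper interface.
    static SourceDto toDto(SourceEntity e) {
        return new SourceDto(e.id(), e.desc());
    }

    public static void main(String[] args) {
        SourceDto dto = toDto(new SourceEntity(1L, "Main source", "do not expose"));
        System.out.println(dto); // internalNotes never reaches the frontend
    }
}
```

For the question's case you would shape one DTO per screen, e.g. a response object with nested source, person, and inventory sections, so Angular can bind each section directly.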
I am using MariaDB to store dynamic columns. The SQL side is covered in the MariaDB documentation: https://mariadb.com/kb/en/library/dynamic-columns/
But I couldn't find a Spring Boot JPA implementation, so I have tried a JPA native query. I am storing JSON in this dynamic column, which has the BLOB data type in MariaDB, but it is very hard because I couldn't find a way to store JSON that nests other objects or arrays. Is there any way to accomplish this task?
JSON is just a string. It could be stored in a TEXT or BLOB column.
The issue is that it is a structured string and it is tempting to reach into it to inspect/modify fields. Don't. It won't be efficient.
Copy the values out of the JSON that you need for WHERE or ORDER BY (etc.) so that MySQL/MariaDB can access them efficiently. Sure, keep copies inside the JSON text/blob if you like.
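A plain-Java sketch of that copy-out idea, with an in-memory list standing in for the real table and made-up field names; the point is that filtering and sorting touch only the extracted column, never the blob:

```java
import java.util.Comparator;
import java.util.List;

// Each "row" keeps the raw JSON blob untouched, plus a salary value copied
// out into its own column at write time. WHERE/ORDER BY style operations
// then run on the column, not on the blob.
public class ExtractedColumnSketch {

    record Row(long id, int salary /* copied out of the JSON */, String rawJson) {}

    static List<Row> paidMoreThan(List<Row> table, int threshold) {
        return table.stream()
                .filter(r -> r.salary() > threshold)           // WHERE salary > ?
                .sorted(Comparator.comparingInt(Row::salary))  // ORDER BY salary
                .toList();
    }

    public static void main(String[] args) {
        List<Row> table = List.of(
                new Row(1, 40000, "{\"name\":\"Krish\",\"salary\":40000}"),
                new Row(2, 55000, "{\"name\":\"Asha\",\"salary\":55000}"));
        System.out.println(paidMoreThan(table, 50000).size()); // prints 1
    }
}
```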
Please describe your use for JSON, there may be more tips on "proper" usage inside a database.
That being said, MariaDB 10.2 has picked up much of Oracle MySQL's JSON support. But there is no JSON datatype, and there are a few function differences.
And this does not account for how far behind the third-party software (Spring Boot, etc.) may be.
I'm using MongoDB and PostgreSQL in my application. We need MongoDB because any number of new fields might be added, and we store the data for those fields in MongoDB.
We store our fixed field values in PostgreSQL and custom field values in MongoDB.
E.g.
**Employee Table (RDBMS):**

    id  Name   Salary
    1   Krish  40000

**Employee Collection (MongoDB):**

    {
      _id: <some autogenerated id of MongoDB>,
      instanceId: 1,        // the id of the SQL row, manually assigned
      employeeCode: "A001"
    }
We get the records from SQL and, using their ids, fetch the related records from MongoDB. We then map the result to get the values of the new fields and send them to the UI.
Now I'm looking for an optimized way to get the MongoDB results into the PostgreSQL POJO/model so that I don't have to fetch the data from MongoDB manually by passing the SQL ids and then mapping them again.
Is there any way to connect MongoDB with PostgreSQL through columns (here, the id of the RDBMS and the instanceId of MongoDB) so that with one fetch I can get the related Mongo results too? Any return type is acceptable, but I need all of it in one call.
I'm using Hibernate and Spring in my application.
Using Spring Data might be the best solution for your use case, since it supports both:
JPA
MongoDB
You can still get all the data in one request, but that doesn't mean you have to use a single DB call. You can have one service call that spans two database calls. Because the PostgreSQL row is probably the primary entity, I advise you to share the PostgreSQL primary key with MongoDB too.
There's no need for separate IDs. This way you can simply fetch the SQL row and the Mongo document by the same ID. Sharing the same ID also lets you process those requests concurrently and merge the results before returning from the service call, so the service method's duration is not the sum of the two repository calls but the max of the two.
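A sketch of that "one service call, two concurrent repository calls" pattern; the PostgreSQL and MongoDB lookups are stubbed with in-memory maps keyed by the shared ID (in real code these would be Spring Data repositories):

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;

// Both lookups run in parallel and are merged before the service returns,
// so total latency is roughly the max of the two calls, not their sum.
public class ConcurrentFetchSketch {

    static final Map<Long, String> postgresRows = Map.of(1L, "Krish/40000");
    static final Map<Long, String> mongoDocs    = Map.of(1L, "{employeeCode:'A001'}");

    static String fetchEmployee(long id) {
        CompletableFuture<String> sqlPart =
                CompletableFuture.supplyAsync(() -> postgresRows.get(id));
        CompletableFuture<String> mongoPart =
                CompletableFuture.supplyAsync(() -> mongoDocs.get(id));
        // Merge the fixed fields and the custom fields into one result.
        return sqlPart.thenCombine(mongoPart, (row, doc) -> row + " + " + doc).join();
    }

    public static void main(String[] args) {
        System.out.println(fetchEmployee(1L));
    }
}
```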
Astonishingly, yes, you potentially can. There's a foreign data wrapper named mongo_fdw that allows PostgreSQL to query MongoDB. I haven't used it and have no opinion as to its performance, utility or quality.
I would be very surprised if you could effectively use this via Hibernate, unless you can convince Hibernate that the FDW mapped "tables" are just views. You might have more luck with EclipseLink and their "NoSQL" support if you want to do it at the Java level.
Separately, this sounds like a monstrosity of a design. There are many sane ways to do what you want within a decent RDBMS, without going for a hybrid database platform. There's a time and a place for hybrid, but I really doubt your situation justifies the complexity.
Just use PostgreSQL's json/jsonb support for dynamic mappings. Or use traditional options like storing JSON as text fields, storing XML, or even EAV mapping. Don't build a Rube Goldberg machine.
I need to convert Java objects imported from the DB to XML so that I can use them with XStream in OptaPlanner. Is there any alternative to Hibernate for accessing the data from the DB? How do I add more attributes for job scheduling?
optaplanner-core works on POJOs (JavaBeans). It's oblivious to the fact that in optaplanner-examples those POJOs are read from and written to XML files by XStream (and it doesn't care). Similarly, you can use any other technology to store those POJOs:
JPA (for example Hibernate ORM, OpenJPA, ...) to store them in a database
JDBC to store them in a database. Note: JDBC works with SQL statements, so you'll need to map SQL records to POJOs manually.
JAXB to store them in XML
XStream to store them in XML (as the examples do it)
Infinispan, MongoDB, ... to store them in a big data cloud. Note: this might require manual mapping too, unless you use Hibernate OGM.
...
OptaPlanner doesn't care, so it doesn't restrict you :)
I'm trying to understand how MongoDB works and I have some questions.
I understand how to delete, insert, update and select, but I have some best-practice questions:
1) Do we have to create an index, or can we just use the _id which is auto-generated?
2) If I have, for example, 2 kinds of objects (cars and drivers) with an n-n relation between them, do I have to create 3 collections (car, driver, and a collection which links the other two)?
3) To rebuild my objects, do I have to parse my JSON with the JSON object?
Thanks for your help
Three good questions. I'll answer each in turn.
1) Do we have to create an index, or can we just use the _id which is
auto-generated?
You should definitely try to (re)use the _id index. This usually means mapping one of the unique fields (or the primary key, in RDBMS speak) of your domain object to the _id field. You do have to be careful that the field does not get too large if you will be sharding, but that is a separate question to answer.
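A tiny sketch of what reusing _id looks like at the document level; the employeeCode field here is hypothetical:

```java
import java.util.Map;

// Instead of letting MongoDB generate an ObjectId, map a unique domain field
// onto _id directly, so the mandatory _id index does double duty as the
// primary-key index.
public class NaturalIdSketch {

    static Map<String, Object> toDocument(String employeeCode, String name) {
        return Map.of(
                "_id", employeeCode, // unique domain key reused as _id
                "name", name);
    }

    public static void main(String[] args) {
        System.out.println(toDocument("A001", "Krish"));
    }
}
```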
2) If I have, for example, 2 kinds of objects (cars and drivers) with an
n-n relation between them, do I have to create 3 collections (car, driver,
and a collection which links the other two)?
No! You only need two collections. One for the cars and one for the drivers. The "join table" is pulled into each of the collections as DBRefs.
Each car document will contain an array of DBRefs or document references. Those references contain the database name (optional), the collection name, and the _id of a driver document. You will have a similar set of DBRefs in each driver document pointing to the cars that driver drives.
Most of the drivers have support for creating and de-referencing these document references.
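A plain-Java sketch of the two-collection model; the plain id strings here are a simplified stand-in for DBRefs:

```java
import java.util.List;
import java.util.Map;

// Each side stores the other side's _id values; dereferencing a car's
// drivers is then a lookup per stored id, with no third collection needed.
public class NnRefSketch {

    record Car(String id, List<String> driverIds) {}
    record Driver(String id, String name, List<String> carIds) {}

    static List<Driver> driversOf(Car car, Map<String, Driver> driverCollection) {
        return car.driverIds().stream().map(driverCollection::get).toList();
    }

    public static void main(String[] args) {
        Driver d1 = new Driver("d1", "Alice", List.of("c1"));
        Driver d2 = new Driver("d2", "Bob", List.of("c1"));
        Car c1 = new Car("c1", List.of("d1", "d2"));
        System.out.println(driversOf(c1, Map.of("d1", d1, "d2", d2)).size()); // prints 2
    }
}
```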
3) To rebuild my objects, do I have to parse my JSON with the JSON object?
MongoDB's lingua franca is actually BSON. You can think of BSON as a typed, binary, easily parsed version of JSON. Most of the drivers have some capability to convert from JSON to their representation of BSON and back. If you are developing in Java, the 10gen driver's utility class is JSON; for the asynchronous driver it is called Json.
Having said that, unless your data is already JSON, I would not convert your data to JSON just to convert it to BSON and back again. Instead, either look for an ODM (Object-Document Mapper) for your language of choice or perform the translation from your domain object to the driver's BSON representation directly.
HTH-
Rob.