I am trying to read key/value pairs, currently from a JSON file, and another approach has been suggested: read them from DB tables instead, because if a value changes in the future, only the DB needs to be updated.
I think the JSON file is the better fit, since the values will hardly ever change (rarest of rare cases), although the advantage of the DB approach is that you just change the value in the DB and you're done.
My point is that JSON will be faster than the DB, and using JSON reduces load on the DB, since every click in the UI would otherwise trigger an extra DB call.
What do you think? Please let me know.
This very much depends on how you are going to use these data.
Do you need to update it often?
Do you need to update by just one specific field?
Do you need to fetch records based on some specific field?
Do you need to fetch whole json or just some specific fields?
Do some parts of json reference any other tables?
Also, consider the size of the data: if the JSON files together grow larger than all your other tables combined, you may break the DB cache. On the other hand, you can always create a separate database for your JSON files if you still need some relational database features.
So I would start by answering the first five questions anyway.
I have a bunch of XML files here which I would like to store in a Cassandra database. Is there any possibility to manage that, or do I have to parse and transform the XML files?
You can certainly store them as a blob or text but you will not be able to query the individual fields within the XML files. One other thing you'd want to be cautious of is payload size and partition size. Cassandra in general isn't really designed as an object store but depending on payload size and desired query functionality, you may either have to parse/chunk them out or look for an alternative solution.
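For the plain text route, a minimal sketch, assuming the DataStax Java driver 4.x, a locally reachable cluster, and an already created docs keyspace (the table and column names are made up for illustration):

import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.SimpleStatement;

import java.nio.file.Files;
import java.nio.file.Path;
import java.util.UUID;

public class XmlToCassandra {

    public static void main(String[] args) throws Exception {
        // Read the XML as plain text; nothing inside it will be queryable later.
        String xml = Files.readString(Path.of("document.xml"));

        try (CqlSession session = CqlSession.builder().build()) {
            // One row per file, with the whole document in a text column.
            session.execute("CREATE TABLE IF NOT EXISTS docs.xml_files ("
                          + " id uuid PRIMARY KEY,"
                          + " file_name text,"
                          + " payload text)");

            session.execute(SimpleStatement.newInstance(
                "INSERT INTO docs.xml_files (id, file_name, payload) VALUES (?, ?, ?)",
                UUID.randomUUID(), "document.xml", xml));
        }
    }
}

Whether this is acceptable still comes down to the payload and partition sizes mentioned above; very large files are better chunked or kept outside Cassandra.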
I am writing a web API using Spring and Postgres.
I have a case where I take a JSON object, item.
The uri is /api/item/{itemId}
Request type is PUT
Json:
{
"name":"itemname",
"description":"item description here"
}
So I do (using JdbcTemplate) a SELECT statement to check if the itemId exists and then update if it does.
I would also like to implement a case with partial PUTs taking JSON that looks like this:
{
"name":"itemname"
}
OR
{
"description":"item description here"
}
where only the respective fields are updated. In Spring, the variables not present are automatically null.
The way this is implemented now is:
SELECT all columns from the items table
Sequentially check every expected variable for null and, if it is null, replace it with the value selected from the table in step 1.
UPDATE all columns with the values (none of which should be null if the table has a not null constraint)
Question: How do you do this without == null or != null checks? It seems to be poor design and involves iterating through every single expected variable for every single PUT request (and I will have many of those).
Desired responses (in order of desirability):
There's a way in Postgres where if a null value is input, the column-value is simply not written to the database (and no error is produced)
There is a way to use Spring (and Jackson) to create a Java object with only the provided values and a way to generate SQL and JdbcTemplate code that only updates those specific columns.
PATCH is the way of life - implement it
Change the front-end to always send everything
You have two choices when working with the database:
Just update what has changed, doing everything by yourself.
Get Jackson and Hibernate to do it for you.
So let's look at No. 1:
Let's say you're looking at the contents of an HTML form that has been sent back to the server. Take every field in the form and update only those fields in the database with an SQL statement. Anything that is not in the form does not get updated in your table. Simple, and it works well: the update is restricted to the form's contents.
In general this is a simple option, apart from one problem: HTML checkboxes. They can catch you out on an update because, due to a little design quirk, they are not sent back to the server when unchecked.
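If the caller simply binds null for anything it did not send (which is what Spring's binding gives you), Postgres can also be told to keep the old value with COALESCE, so there is no SELECT-then-merge step at all. A minimal JdbcTemplate sketch, using the items table and columns from the question; note that with this trick a column can never be explicitly set to null:

import org.springframework.jdbc.core.JdbcTemplate;

public class ItemPartialUpdate {

    private final JdbcTemplate jdbcTemplate;

    public ItemPartialUpdate(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Columns bound to null keep their current value, so a body carrying only
    // "name" (or only "description") updates just that column.
    public int update(long itemId, String name, String description) {
        String sql = "UPDATE items"
                   + " SET name = COALESCE(?, name),"
                   + "     description = COALESCE(?, description)"
                   + " WHERE id = ?";
        return jdbcTemplate.update(sql, name, description, itemId);
    }
}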
No. 2: Perhaps you're looking for Hibernate, which you didn't mention. Jackson will populate a Java object from the JSON for you (it must include a record id). Use Hibernate to load the existing record into a Java class, apply the new values Jackson has provided, then tell Hibernate to merge() it into the existing record in the database, which it will. I also use Roo to create my Hibernate-ready classes for me.
No. 2 is harder to learn and set up, but once you've sussed it, it's very easy to change things, add fields and so on.
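For the No. 2 route, a rough sketch of the load / overlay / merge flow with Jackson and JPA. The Item entity here is a minimal stand-in with the two fields from the question, and the EntityManager wiring is assumed:

import com.fasterxml.jackson.databind.ObjectMapper;

import javax.persistence.Entity;      // newer stacks use jakarta.persistence
import javax.persistence.EntityManager;
import javax.persistence.Id;

// Minimal stand-in for the real mapped entity; field names match the JSON above.
@Entity
class Item {
    @Id
    public Long id;
    public String name;
    public String description;
}

public class ItemPatchService {

    private final EntityManager entityManager;
    private final ObjectMapper objectMapper = new ObjectMapper();

    public ItemPatchService(EntityManager entityManager) {
        this.entityManager = entityManager;
    }

    public Item applyPartialUpdate(long itemId, String json) throws Exception {
        // Load the current state of the record (404 handling when null is omitted here).
        Item item = entityManager.find(Item.class, itemId);

        // Jackson only overwrites the fields that are present in the JSON body,
        // so a "name"-only or "description"-only payload leaves the rest alone.
        objectMapper.readerForUpdating(item).readValue(json);

        // Hand the updated state back to Hibernate/JPA.
        return entityManager.merge(item);
    }
}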
I need to permanently store a big vocabulary, associate some information with each word, and use it to search words efficiently.
Is it better to store it in a DB (in a simple table, letting the DBMS do the work of structuring the data based on the key), or is it better to create a
trie data structure and then serialize it to a file and deserialize it once the program starts, or maybe use an XML file instead of serialization?
Edit: the vocabulary would be on the order of 5 thousand to 10 thousand words, and for each word the metadata is structured as an array of 10 Integers. Access to a word is very frequent (this is why I thought of a trie data structure, which has a search time of ~O(1), instead of a DB using a B-tree or something similar, where a search is ~O(log n)).
p.s. using java.
Thanks!
Using a DB is better.
Many companies have moved to a DB; for example, the ERP Divalto used to rely on serialization and moved to a DB to get better performance.
You have many choices of DBMS. If you want all the data in one file, the simple way is to use SQLite; its advantage is that it does not need any DBMS server running.
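A minimal sketch of that SQLite route from Java, assuming the org.xerial sqlite-jdbc driver is on the classpath; the table layout (the word plus a comma-joined string of its ten integers) is just one illustrative choice:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class VocabularyStore {

    public static void main(String[] args) throws Exception {
        // Single-file database, no server process needed.
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:vocabulary.db")) {
            try (Statement st = conn.createStatement()) {
                // The primary key gives an indexed lookup by word.
                st.execute("CREATE TABLE IF NOT EXISTS vocabulary ("
                         + " word TEXT PRIMARY KEY,"
                         + " metadata TEXT NOT NULL)"); // e.g. "1,2,3,4,5,6,7,8,9,10"
            }

            int[] info = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
            try (PreparedStatement ins = conn.prepareStatement(
                    "INSERT OR REPLACE INTO vocabulary (word, metadata) VALUES (?, ?)")) {
                ins.setString(1, "example");
                ins.setString(2, join(info));
                ins.executeUpdate();
            }

            try (PreparedStatement sel = conn.prepareStatement(
                    "SELECT metadata FROM vocabulary WHERE word = ?")) {
                sel.setString(1, "example");
                try (ResultSet rs = sel.executeQuery()) {
                    if (rs.next()) {
                        System.out.println("example -> " + rs.getString("metadata"));
                    }
                }
            }
        }
    }

    private static String join(int[] values) {
        StringBuilder sb = new StringBuilder();
        for (int v : values) {
            if (sb.length() > 0) sb.append(',');
            sb.append(v);
        }
        return sb.toString();
    }
}

At 5 to 10 thousand words either approach is quick; the single-file database mainly buys you persistence and indexed lookups without running a server.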
I am editing Java code which stores data in a YAML file, but I need to make it use MySQL instead, and I'm not sure how to go about doing this. The code makes requests to read and write data such as SQLset("top.middle.nameleaf", "Joe") or SQLget("top.middle.ageleaf"). These functions are defined by me. This would be simple with YAML, but I'm not sure how to implement it with SQL. Thanks in advance. One other thing: if top.middle is set to null, then top.middle.nameleaf should be removed, as it would be in YAML.
SQL doesn't work in the same way as YAML. You cannot blindly replace a YAML solution with a SQL one; you will have to actually think about what you want to do.
Get a basic understanding of how SQL works, with tables and columns, and relationships between them.
Define a set of tables that match the data you have in YAML (it might be one table for each structure, with a foreign key linking tables that are nested in the YAML).
Work out how best to adapt your code to use SQL. One approach might be to work with YAML until the data are "ready" and then translate the final YAML structure to SQL. Alternatively, you may want to replace all your YAML routines with SQL routines, but without doing the above it is hard to say exactly how that will work.
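For that last point, one crude way to keep the existing SQLset/SQLget call style while moving to MySQL is a single key-value table keyed on the dotted path. This is explicitly not the relational modelling recommended above, just a bridge that preserves the old API; the settings table, column names and connection details are all assumptions:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PathStore {

    private final Connection conn;

    public PathStore(String url, String user, String password) throws Exception {
        // e.g. url = "jdbc:mysql://localhost:3306/app" (connection details assumed)
        this.conn = DriverManager.getConnection(url, user, password);
        try (PreparedStatement st = conn.prepareStatement(
                "CREATE TABLE IF NOT EXISTS settings ("
              + " path VARCHAR(255) PRIMARY KEY,"
              + " val TEXT)")) {
            st.execute();
        }
    }

    public void SQLset(String path, String value) throws Exception {
        if (value == null) {
            // Mimic the YAML behaviour: nulling "top.middle" drops "top.middle.*" too.
            try (PreparedStatement del = conn.prepareStatement(
                    "DELETE FROM settings WHERE path = ? OR path LIKE ?")) {
                del.setString(1, path);
                del.setString(2, path + ".%");
                del.executeUpdate();
            }
            return;
        }
        try (PreparedStatement up = conn.prepareStatement(
                "INSERT INTO settings (path, val) VALUES (?, ?)"
              + " ON DUPLICATE KEY UPDATE val = VALUES(val)")) {
            up.setString(1, path);
            up.setString(2, value);
            up.executeUpdate();
        }
    }

    public String SQLget(String path) throws Exception {
        try (PreparedStatement sel = conn.prepareStatement(
                "SELECT val FROM settings WHERE path = ?")) {
            sel.setString(1, path);
            try (ResultSet rs = sel.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }
}

Longer term, the per-structure tables with foreign keys described above are the better shape; this only buys time while migrating.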
I am trying to create an application in Java which pulls records from the database and maps them to objects. It does that without knowing what the schema of the database looks like. All I want to do is fetch all rows from all tables and store them somewhere. There could be a thousand tables with thousands of records each. The application doesn't know the name of any table or attribute, so it should map them "on the fly". I looked at Hibernate but it doesn't give me what I want for this app. I don't want to create hard-coded XML files and classes for the mapping. Any ideas how I can accomplish this?
Thanks
Oracle has a bunch of data dictionary views for metadata.
ALL_TABLES and ALL_TAB_COLUMNS would be the first places to start. Then you'd build ad-hoc queries based on what you get out of there. Not sure whether you have to deal with all data types (dates, blobs, spatial, user-defined...).
Not sure what you mean by "store them somewhere". If you start thinking CSV or XML files, you'll need to escape various characters from VARCHAR2 columns.
If you are looking for some generic extract/unload routines, you should look at what is already available in the database or open-source/commercially.
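A rough sketch of that metadata-driven extraction over plain JDBC, narrowed to the current schema via USER_TABLES rather than ALL_TABLES; the connection details are placeholders, and holding every row in memory as a Map is only workable for modest volumes:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class SchemaDump {

    public static void main(String[] args) throws Exception {
        // Connection details are placeholders.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/XEPDB1", "user", "password")) {

            // Table names come from the data dictionary, not from hard-coded mappings.
            List<String> tables = new ArrayList<>();
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT table_name FROM user_tables")) {
                while (rs.next()) {
                    tables.add(rs.getString(1));
                }
            }

            // For each table, build an ad-hoc SELECT and read rows into maps
            // keyed by column name (ResultSetMetaData supplies the names).
            Map<String, List<Map<String, Object>>> dump = new HashMap<>();
            for (String table : tables) {
                List<Map<String, Object>> rows = new ArrayList<>();
                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery("SELECT * FROM " + table)) {
                    ResultSetMetaData meta = rs.getMetaData();
                    while (rs.next()) {
                        Map<String, Object> row = new LinkedHashMap<>();
                        for (int i = 1; i <= meta.getColumnCount(); i++) {
                            row.put(meta.getColumnName(i), rs.getObject(i));
                        }
                        rows.add(row);
                    }
                }
                dump.put(table, rows);
            }

            System.out.println("Dumped " + dump.size() + " tables");
        }
    }
}

For real volumes you would stream each table to wherever "somewhere" is instead of collecting everything first.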
MyBatis provides a pretty simple way to map data results to objects and back, maybe check that out?
http://code.google.com/p/mybatis/
Not to be flip, but for this task, you might want to check out Ruby on Rails and its ActiveRecord approach