Using MongoDB: is it recommended to store data inside a HashMap? - Java

I just recently switched from MySQL to MongoDB. With MySQL I stored the player data inside a HashMap and retrieved the name, coins, etc. from there, so I didn't have to constantly query the database.
Now that I'm on MongoDB, would I need to do the same thing: store the values inside a HashMap and retrieve them the same way I did with MySQL?

It depends on your requirements. Migrating from MySQL to MongoDB does not mean your reads will suddenly be superfast; if MongoDB had a significant raw I/O advantage, the MySQL developers would have adopted the same techniques as well. MongoDB provides flexibility over MySQL and has some other advantages, but if your load stays the same, you should still have a caching layer in front of the MongoDB layer. Both MySQL and MongoDB come with built-in caching that caches results on a per-query basis, just like a hashmap, but the rest of the data sits on disk, and as mentioned MongoDB has no inherent I/O advantage over MySQL. So keep a caching layer to avoid excessive querying of the DB.
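As a rough illustration, here is a minimal read-through cache in front of MongoDB using the official Java driver. The database/collection names ("game"/"players") and the "name"/"coins" fields are assumptions taken from the question, not anything MongoDB prescribes:

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import com.mongodb.client.model.Filters;
    import org.bson.Document;

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class PlayerCache {

        private final MongoCollection<Document> players;
        private final Map<String, Document> cache = new ConcurrentHashMap<>();

        public PlayerCache(MongoClient client) {
            // "game" and "players" are hypothetical names for this sketch.
            this.players = client.getDatabase("game").getCollection("players");
        }

        public Document getPlayer(String name) {
            // computeIfAbsent queries MongoDB only on a cache miss;
            // subsequent lookups are served from memory.
            return cache.computeIfAbsent(name,
                    n -> players.find(Filters.eq("name", n)).first());
        }

        public void invalidate(String name) {
            // Call after a write so stale data gets re-read from MongoDB.
            cache.remove(name);
        }

        public static void main(String[] args) {
            try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                PlayerCache playerCache = new PlayerCache(client);
                Document p = playerCache.getPlayer("Alice"); // first call hits the DB
                System.out.println(p == null ? "not found" : p.get("coins"));
            }
        }
    }

With this pattern, repeated lookups of the same player are served from memory, and only cache misses (or explicit invalidations after a write) touch MongoDB.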

Related

How does an Apache Ignite cache relate to a database?

The official documentation leaves a lot of questions unanswered.
I need an in-memory Ignite store where I can keep some data loaded from a third-party database. I understand two things:
I know how to connect to Ignite via the JDBC driver, how to write and execute DDL statements, and how to insert and query data with H2-compatible SQL statements.
I know how to initialize an Ignite cache using DataStreamers and how to query data using SqlFieldsQuery.
But I have no idea how to combine these two features to make them work together; I don't even know if it is possible. If it's impossible, how should I initialize the database for future access via JDBC from an external app?
Yes, it's possible. If you're able to query the cache using SqlFieldsQuery, then you can definitely run the same SQL query against it through the JDBC driver.
Here is an example that shows how to access data inserted via the key-value API with SQL: https://github.com/dmagda/ignite_world_demo - just replace the SqlFieldsQuery access with access to the cache through the JDBC driver.
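For illustration, here is a hedged sketch of the two halves working together: a cache populated through the key-value API and then queried over the thin JDBC driver. The cache name, the Person class, and its fields are made up for the example; it assumes Ignite 2.x with the client connector enabled (the default):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.query.annotations.QuerySqlField;
    import org.apache.ignite.configuration.CacheConfiguration;

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class IgniteKvToJdbc {

        public static class Person {
            @QuerySqlField
            private final String name;
            @QuerySqlField
            private final int age;

            public Person(String name, int age) {
                this.name = name;
                this.age = age;
            }
        }

        public static void main(String[] args) throws Exception {
            // setIndexedTypes exposes the cache to SQL as table PERSON
            // in schema "personCache".
            CacheConfiguration<Long, Person> cfg =
                    new CacheConfiguration<Long, Person>("personCache")
                            .setIndexedTypes(Long.class, Person.class);

            try (Ignite ignite = Ignition.start()) {
                IgniteCache<Long, Person> cache = ignite.getOrCreateCache(cfg);

                // Write through the key-value API...
                cache.put(1L, new Person("Alice", 30));

                // ...and read the same row back over the thin JDBC driver.
                Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
                try (Connection conn = DriverManager.getConnection(
                             "jdbc:ignite:thin://127.0.0.1/");
                     Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery(
                             "SELECT name, age FROM \"personCache\".PERSON")) {
                    while (rs.next())
                        System.out.println(rs.getString(1) + " " + rs.getInt(2));
                }
            }
        }
    }

The key point is setIndexedTypes (or an equivalent QueryEntity), which is what makes key-value data visible to SQL, whether the query arrives via SqlFieldsQuery or JDBC.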

Can complex MongoDB stored procedures work with Java web services?

I am planning to start developing an iOS application and would like to use MongoDB as my database. I have a lot of complex stored procedures to write, with join queries.
I'm new to MongoDB and have absolutely no idea how stored procedures work in it, and I'm using Java REST web services to call my DB.
Any advice from professionals would be appreciated!
Thanks in advance.
MongoDB is a NoSQL database program; instead of classic tables it stores data in JSON-like documents with schemas, which means you cannot write stored procedures in it or use SQL constructs like joins. You won't be able to use them the way you planned. What you'll have to do is store your data, retrieve it from your collections, and then resolve the relationships yourself in application code. So if you are planning to work with sorted, related data through complex stored procedures, MongoDB is not a good choice.
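To illustrate what "resolving relationships by hand" looks like in practice, here is a small sketch with the MongoDB Java driver; the "shop" database, the collection names, and the fields are all hypothetical:

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoDatabase;
    import com.mongodb.client.model.Filters;
    import org.bson.Document;

    public class ManualJoin {
        public static void main(String[] args) {
            try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                MongoDatabase db = client.getDatabase("shop");

                // Step 1: fetch the "parent" document.
                Document customer = db.getCollection("customers")
                        .find(Filters.eq("email", "alice@example.com"))
                        .first();
                if (customer == null) return;

                // Step 2: use a field of the parent as the lookup key for the
                // "child" documents; the relationship is resolved in code,
                // not by the database.
                for (Document order : db.getCollection("orders")
                        .find(Filters.eq("customerId", customer.get("_id")))) {
                    System.out.println(order.toJson());
                }
            }
        }
    }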

How to join a record set returned from a web service with one of your SQL tables

I thought about this solution: get the data from the web service, insert it into a table, and then join it with the other table, but that will hurt performance, and afterwards I would have to delete all that data.
Are there other ways to do this?
You don't return a record set from a web service. HTTP knows nothing about your database or result sets.
HTTP requests and responses are strings. You'll have to parse out the data, turn it into queries, and manipulate it.
Performance depends a great deal on things like having proper indexes on columns in WHERE clauses, the nature of the queries, and a lot of details that you don't provide here.
This sounds like a classic case of "client versus server". Why don't you write a stored procedure that does all that work on the database server? You are describing a lot of work to bring a chunk of data to the middle tier, manipulate it, put it back, and then delete it. I'd figure out how to have the database do it if I could.
No, you don't need to save anything into the database; there are a number of ways to convert XML into a table without persisting it.
For example, in an Oracle database you can use XMLTable/XMLType/XQuery/dbms_xml
to convert the XML result from the web service into a table and then use it in your queries.
For example:
If you use Oracle 12c, you can use JSON_QUERY: Oracle 12c JSON
XMLTable: oracle-xmltable-tutorial
this week's discussion about converting XML into table data
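As a hedged sketch of that idea from Java: the XML returned by the web service is passed as a bind variable into an Oracle XMLTABLE expression and joined against a regular table, so nothing is ever persisted. The connection details, the employees table, and the XML shape are all invented for the example:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class XmlJoin {

        // Rows are extracted from the XML payload on the fly by XMLTABLE
        // and joined against a regular table; nothing is persisted.
        private static final String SQL =
                "SELECT e.name, x.salary "
              + "FROM XMLTABLE('/employees/employee' "
              + "       PASSING XMLTYPE(?) "
              + "       COLUMNS emp_id NUMBER PATH 'id', "
              + "               salary NUMBER PATH 'salary') x "
              + "JOIN employees e ON e.emp_id = x.emp_id";

        public static void main(String[] args) throws Exception {
            // Normally this string would be the web-service response.
            String xml = "<employees>"
                       + "<employee><id>1</id><salary>50000</salary></employee>"
                       + "</employees>";

            try (Connection conn = DriverManager.getConnection(
                         "jdbc:oracle:thin:@//localhost:1521/ORCL", "scott", "tiger");
                 PreparedStatement ps = conn.prepareStatement(SQL)) {
                ps.setString(1, xml); // the XML goes in as a bind variable
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next())
                        System.out.println(rs.getString("name")
                                + " " + rs.getBigDecimal("salary"));
                }
            }
        }
    }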
It is common to think about applications having a three-tier structure: user interface, "business logic"/middleware, and backend data management. The idea of pulling records from a web service and (temporarily) inserting them into a table in your SQL database has some advantages, as the "join" you wish to perform can be quickly implemented in SQL.
Oracle (like other SQL DBMSs) features temporary tables, which are optimized for just such tasks.
However, this might not be the best approach given your concerns about performance. Judging by the tags on the question, your "middleware" layer is presumably written in Java, and the lack of any explicit description suggests you may be attempting a two-tier design, where user-interface programs connect directly to the backend data-management resources.
Given your apparent investment in Oracle products, you might find it worthwhile to incorporate Oracle Middleware elements in your design. In particular Oracle Fusion Middleware promises to enable "data integration" between web services and databases.

Run ElasticSearch on top of a relational database

The problem I have is whether it is possible to use ElasticSearch on top of a relational database.
1. When I insert or delete a record in the relational database, will it be reflected in Elasticsearch?
2. If I insert a document into Elasticsearch, will it be persisted in the database?
3. Does it use a cache or an in-memory database to facilitate search? If so, which one does it use?
There is no direct connection between Elasticsearch and relational databases - ES has its own datastore based on Apache Lucene.
That said, you can, as others have noted, use the Elasticsearch River plugin for JDBC to load data from a relational database into Elasticsearch. Keep in mind there are a number of limitations to this approach:
- It's one way only: the JDBC River for ES only reads from the source database; it does not push data from ES into the source database.
- Deletes are not handled: if you delete data in your source database after it's been indexed into ES, that deletion will not be reflected in ES. See ElasticSearch river JDBC MySQL not deleting records and https://github.com/jprante/elasticsearch-river-jdbc/issues/213
- It was not intended as a production, scalable solution for relational database and Elasticsearch integration. From the JDBC River author's comment in January of 2014, it was designed as "a single node (non-scalable) solution" "for demonstration purposes": http://elasticsearch-users.115913.n3.nabble.com/Strategy-for-keeping-Elasticsearch-updated-with-MySQL-td4047253.html
To answer your questions directly (assuming you use the JDBC River):
1. New document inserts can be handled by the JDBC River, but deletes of existing data are not.
2. Data does not flow from Elasticsearch into your relational database. That would need to be custom development work.
3. Elasticsearch is built on top of Apache Lucene. Lucene in turn depends a great deal on file-system caching at the OS level (which is why ES recommends keeping heap size down to no more than 50% of total memory, to leave a lot for the file-system cache). In addition, the ES/Lucene stack makes use of a number of internal caches (like the Lucene field cache and the filter cache):
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules-cache.html
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules-fielddata.html
Internally, the filter cache is implemented using a bitset:
http://www.elasticsearch.org/blog/all-about-elasticsearch-filter-bitsets/
1) You should take a look at the Elasticsearch JDBC river here for inserts (I believe deleted rows aren't managed any more; see the developer's comment).
2) Unless you do it manually, it is not natively managed by Elasticsearch.
3) Indeed, Elasticsearch uses caches to improve performance, especially when using filters; bitsets (arrays of 0/1) are stored.
Came across this question while looking for a similar thing. Thought an update was due.
My Findings:
Elasticsearch has now deprecated Rivers, though the above-mentioned jprante's River lives on...
Another option I found was the Scotas Push Connector which pushes inserts, updates and deletes from an RDBMS to Elasticsearch. Details here: http://www.scotas.com/product-scotas-push-connector.
Example implementation here: http://www.scotas.com/blog/?p=90

How Can Infinispan Be Used for Data Persistence?

I'm very new to the Infinispan framework. I want to know whether Infinispan can be used to synchronize cached entity data with Oracle database tables. The simple scenario is this: when I put an entity into the cache, I want that entity persisted to the database without explicitly persisting it myself (only putting it into the cache). What I am looking for is a slightly different cache store. The idea behind it is to have the data stored as if we used Hibernate JPA, so the cache store needs to update the right table/row depending on the information in the map key and/or information gained from the JPA annotations. Please let me know whether Infinispan supports this scenario. If it does, please share some sample code with me.
Maybe what you're looking for is Hibernate OGM, which allows you to store data into a data grid such as Infinispan, instead of the database, while still using the JPA API?
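For a rough idea of what that looks like in code: with Hibernate OGM the application keeps using the plain JPA API, and the datastore behind it is configured in persistence.xml. The persistence-unit name "ogm-infinispan" below is hypothetical, and transaction handling is simplified here (a real OGM/Infinispan setup typically runs under JTA):

    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.Persistence;

    @Entity
    class Player {
        @Id @GeneratedValue
        Long id;
        String name;
    }

    public class OgmExample {
        public static void main(String[] args) {
            // "ogm-infinispan" is a hypothetical persistence unit that would
            // name HibernateOgmPersistence as its provider and Infinispan as
            // the datastore in persistence.xml.
            EntityManagerFactory emf =
                    Persistence.createEntityManagerFactory("ogm-infinispan");
            EntityManager em = emf.createEntityManager();

            em.getTransaction().begin();
            Player p = new Player();
            p.name = "Alice";
            em.persist(p); // a plain JPA call; the entity lands in the grid
            em.getTransaction().commit();

            em.close();
            emf.close();
        }
    }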
