I have a Couchbase DB deployed in production and I want to write Java code to query a few details. It doesn't have any views as of today, and in order to have views created I need to go through a lot of process. Is there a way to run queries using code written with the Couchbase Java SDK, or is it mandatory to get views created in order to run custom queries?
If you're using Couchbase 4.0 or above, you can use N1QL. Create at least a primary N1QL index once, and you can query anything. You can even create more specific N1QL secondary indexes tailored to queries for which you need better performance.
Views are very specific, they force you to think about exactly how you'll query your data and limit you to that use case. N1QL on the other hand is very general purpose. It's a superset of SQL, with JSON-specific additions.
Of course, both work on the assumption that your data is JSON.
Without views or N1QL, you're limited to requests using the keys of documents, which you must know in advance (but that could be a usable alternative nonetheless, e.g. if the keys are mentioned in another document, or can be reconstructed from the content of another document whose key you know).
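To make the N1QL route concrete, here is a minimal sketch using the Couchbase Java SDK 2.x. The host, bucket name, and field names (`name`, `email`, `type`) are placeholders for your deployment, and it assumes a primary index has been created once with `CREATE PRIMARY INDEX ON `myBucket`;`:

```java
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.document.json.JsonObject;
import com.couchbase.client.java.query.N1qlQuery;
import com.couchbase.client.java.query.N1qlQueryResult;
import com.couchbase.client.java.query.N1qlQueryRow;

public class N1qlExample {
    public static void main(String[] args) {
        // Host and bucket name are placeholders for your environment.
        Cluster cluster = CouchbaseCluster.create("127.0.0.1");
        Bucket bucket = cluster.openBucket("myBucket");

        // A parameterized N1QL query; no views are required, only an index.
        N1qlQuery query = N1qlQuery.parameterized(
                "SELECT name, email FROM `myBucket` WHERE type = $type",
                JsonObject.create().put("type", "user"));

        N1qlQueryResult result = bucket.query(query);
        for (N1qlQueryRow row : result) {
            System.out.println(row.value());
        }
        cluster.disconnect();
    }
}
```

Parameterized statements (rather than string concatenation) also protect you from N1QL injection, the same way prepared statements do in JDBC.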
In .NET C#, we used OData to filter, page, and sort database results from a SQL database. OData in .NET would actually go into the database and push the WHERE and ORDER BY filters down to it, instead of extracting all the database results and applying the filtering in the API's memory.
I am curious whether Java's Apache Olingo queries the database internally or applies the filtering on an in-memory result set in the API.
Resources:
https://www.odata.org/libraries/
https://www.odata.org/documentation/odata-version-2-0/uri-conventions/
Short answer: Olingo 2's JPAProcessor will query the database and fetch only the limited data if $filter and $orderby are specified.
The longer answer: there are two things we need to consider here. Apache Olingo is a framework which allows people to implement OData servers and clients in Java; bare bones, it just provides you with the abstract classes and interfaces from which you implement OData processors however you want.
Coming to the second point, there is also a JPA implementation provided in the Olingo project, which you can locate here.
To answer your question, let's dig into the code of that project to see how it's implemented. We will have to start with JPAProcessorImpl and hop into process(final GetEntitySetUriInfo uriParserResultView); this is where the query is built and where how the data is actually fetched from the database is implemented.
At line 150 the actual query is already built, so if you pass $filter (a WHERE clause) and $orderby, we can see the values are actually passed to the database and baked into the query.
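As an illustration (the entity and property names here are hypothetical, not taken from the Olingo sources), a request such as:

```
GET /odata.svc/Employees?$filter=Salary gt 50000&$orderby=Name asc
```

would be translated by the JPA processor into a JPQL query along the lines of:

```
SELECT e FROM Employee e WHERE e.salary > 50000 ORDER BY e.name ASC
```

so the filtering and sorting happen in the database, not in the API's memory.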
Edit: As @Olivier mentions, my original answer referred to OData in .NET and not Olingo.
Unfortunately, as you have probably found out yourself, the standard Olingo documentation doesn't answer this particular question directly, though it does mention support for lazy loading and streaming behaviour (which would make no sense if filtering happened at the application level).
However, you would be hard-pressed to find an ORM or DB Adapter nowadays that does not support filtering on the database level, as this will almost always be faster than doing so in your application for non-trivial workloads.
I will try to update this answer with a more authoritative source.
(original answer)
According to this analysis:
As long as your data provider supports deferred queries and you don't force evaluation by calling something like .ToList(), the query will not be evaluated until the OData filters are applied, and they'll be handled at the database level.
I have a requirement where I need to copy data from one Oracle table to another table on a daily basis. Currently, I am fetching data from the database and writing it to an Excel file through Java code, so I have a list of POJOs ready to insert. But I am open to an approach where I can directly dump data from my Oracle table into the second table (and I am open to whatever database is appropriate for this, such as Oracle or Amazon DynamoDB). Below are the approaches I could think of. I am still searching for different approaches and will update the post accordingly.
1) The naive approach is to just fire insert queries from the Java code itself. I am using Hibernate, so I can do it a little more easily.
2) Second, I thought about using AWS Lambda. I have not read about it completely; I just have a basic idea of it. But I am opening this question because I am a novice and want to select an efficient approach.
Will you please shed some light on my approaches or suggest a completely different one?
As Lambda has different triggers, you can use one of them to load the Excel data. One solution would be to set up an API through API Gateway which triggers Lambda: call API Gateway with the serialised Excel data, which in turn calls Lambda; deserialise the data in Lambda and save it to DynamoDB. Another solution is S3, which you have mentioned in the comments.
The best approach is to trigger a Lambda function using CloudWatch on a daily basis, which can copy the data from one table to another in Oracle, or from Oracle to DynamoDB. There is no need for S3 or API Gateway, which are more complex and will cost you more.
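If both tables live in the same Oracle instance, the copy from approach 1 can be done set-based so the rows never leave the database at all. A minimal JDBC sketch, where the connection URL, credentials, table names, and column names are all placeholders for your environment:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DailyCopyJob {
    public static void main(String[] args) throws Exception {
        // URL, credentials, and table/column names are placeholders.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password");
             Statement stmt = conn.createStatement()) {
            // Set-based copy: the database moves the rows itself,
            // so no intermediate list of POJOs is needed.
            int copied = stmt.executeUpdate(
                "INSERT INTO target_table (id, name, created_at) " +
                "SELECT id, name, created_at FROM source_table " +
                "WHERE created_at >= TRUNC(SYSDATE) - 1");
            System.out.println("Copied " + copied + " rows");
        }
    }
}
```

The same `INSERT INTO ... SELECT` statement could also be run from a Lambda function on a CloudWatch schedule, as suggested above.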
I thought about this solution: get the data from the web service, insert it into a table and then join it with the other table, but it would affect performance, and afterwards I would have to delete all that data.
Are there other ways to do this?
You don't return a record set from a web service. HTTP knows nothing about your database or result sets.
HTTP requests and responses are strings. You'll have to parse out the data, turn it into queries, and manipulate it.
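Since the response is just a string, parsing it is ordinary library work. A minimal, self-contained sketch, assuming a hypothetical XML payload shape (`<rows><row><id/><name/></row></rows>`) and using only the JDK's built-in DOM parser:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class ResponseParser {
    // Parse a web-service XML payload into (id, name) pairs.
    public static List<String[]> parseRows(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(
                        xml.getBytes(StandardCharsets.UTF_8)));
        NodeList rows = doc.getElementsByTagName("row");
        List<String[]> result = new ArrayList<>();
        for (int i = 0; i < rows.getLength(); i++) {
            Element row = (Element) rows.item(i);
            result.add(new String[] {
                row.getElementsByTagName("id").item(0).getTextContent(),
                row.getElementsByTagName("name").item(0).getTextContent()
            });
        }
        return result;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<rows><row><id>1</id><name>Alice</name></row>"
                   + "<row><id>2</id><name>Bob</name></row></rows>";
        List<String[]> rows = parseRows(xml);
        System.out.println(rows.size() + " rows; first name = " + rows.get(0)[1]);
    }
}
```

Once the rows are in plain Java objects, you can bind them into parameterized queries, or skip this step entirely and let the database parse the XML, as the next answer suggests.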
Performance depends a great deal on things like having proper indexes on columns in WHERE clauses, the nature of the queries, and a lot of details that you don't provide here.
This sounds like a classic case of "client versus server". Why don't you write a stored procedure that does all that work on the database server? You are describing a lot of work to bring a chunk of data to the middle tier, manipulate it, put it back, and then delete it. I'd figure out how to have the database do it if I could.
No, you don't need to save anything into the database; there are a number of ways to convert XML to a table without saving it. For example, in an Oracle database you can use XMLTable/XMLType/XQuery/dbms_xml to convert the XML result from the web service into a table and then use it in your queries.
For example:
if you use Oracle 12c you can use JSON_QUERY: Oracle 12c JSON
XMLTable: oracle-xmltable-tutorial
this week's discussion about converting XML into table data
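A minimal sketch of the XMLTable approach, with hypothetical element and column names (in practice the XMLTYPE payload would come from your web-service call rather than a literal):

```sql
SELECT t.id, t.name
FROM XMLTABLE('/rows/row'
        PASSING XMLTYPE('<rows><row><id>1</id><name>Alice</name></row></rows>')
        COLUMNS id   NUMBER       PATH 'id',
                name VARCHAR2(50) PATH 'name') t;
```

The result behaves like an ordinary row source, so it can be joined directly against your existing table with no temporary insert and no cleanup afterwards.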
It is common to think about applications having a three-tier structure: user interface, "business logic"/middleware, and backend data management. The idea of pulling records from a web service and (temporarily) inserting them into a table in your SQL database has some advantages, as the "join" you wish to perform can be quickly implemented in SQL.
Oracle (as other SQL DBMS) features temporary tables which are optimized for just such tasks.
However, this might not be the best approach given your concerns about performance. It's a guess that your "middleware" layer is written in Java, given the tags on the question, and the lack of any explicit description suggests you may be attempting a two-tier design, where user-interface programs connect directly with the backend data management resources.
Given your apparent investment in Oracle products, you might find it worthwhile to incorporate Oracle Middleware elements in your design. In particular Oracle Fusion Middleware promises to enable "data integration" between web services and databases.
I have a query which does ILIKE on some 11 string or text fields of a table which is not big (500,000 rows), but obviously too big for ILIKE: the search query takes around 20 seconds. The database is Postgres 8.4.
I need to implement this search to be much faster.
What came to my mind:
I made an additional TSVECTOR column assembled from all the columns that need to be searched, and created a full-text index on it. The full-text search was quite fast. But... I cannot map this TSVECTOR type in my .hbm files, so this idea fell through (in any case, I thought of it more as a temporary solution).
Hibernate Search. (I heard about it for the first time today.) It seems promising, but I would like an experienced opinion on it, since I don't want to get into a new API, possibly not the simplest one, for something which could be done more simply.
Lucene
In any case, this has happened now with this table, but I would like the solution to be more generic and applicable to future cases related to full-text searches.
All advice appreciated!
Thanks
I would strongly recommend Hibernate Search, which provides a very easy-to-use bridge between Hibernate and Lucene. Remember, you will be using both here. You simply annotate the properties on your domain classes which you wish to be able to search over. Then, when you update/insert/delete an entity which is enabled for searching, Hibernate Search simply updates the relevant indexes. This only happens if the transaction in which the database changes occurred was committed, i.e. if it's rolled back the indexes will not be left out of sync.
So to answer your questions:
Yes, you can index specific columns on specific tables. You also have the ability to tokenize the contents of a field so that you can match on parts of it.
It's not hard to use at all: you simply work out which properties you wish to search on, tell Hibernate Search where to keep its indexes, and then use the EntityManager/Session interfaces to load the entities you have searched for.
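A minimal sketch of what this looks like, assuming Hibernate Search 5.x-style annotations; the `Article` entity and its fields are invented for illustration:

```java
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.Session;
import org.hibernate.search.FullTextSession;
import org.hibernate.search.Search;
import org.hibernate.search.annotations.Field;
import org.hibernate.search.annotations.Indexed;
import org.hibernate.search.query.dsl.QueryBuilder;

@Entity
@Indexed
class Article {
    @Id
    Long id;

    // Each @Field property is analyzed and added to the Lucene index
    // whenever the entity is inserted/updated in a committed transaction.
    @Field
    String title;

    @Field
    String body;
}

class ArticleSearch {
    // Full-text search over the annotated fields via the Search DSL.
    static List<?> search(Session session, String terms) {
        FullTextSession fts = Search.getFullTextSession(session);
        QueryBuilder qb = fts.getSearchFactory()
                .buildQueryBuilder().forEntity(Article.class).get();
        org.apache.lucene.search.Query luceneQuery = qb.keyword()
                .onFields("title", "body").matching(terms).createQuery();
        // Results come back as managed Hibernate entities, not raw Lucene docs.
        return fts.createFullTextQuery(luceneQuery, Article.class).list();
    }
}
```

Note how the query result is a list of managed entities, so the rest of your Hibernate-based code does not need to know Lucene is involved at all.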
Since you're already using Hibernate and Lucene, Hibernate Search is an excellent choice.
What Hibernate Search will primarily provide is a mechanism to have your Lucene indexes updated when data is changed, and the ability to maximize what you already know about Hibernate to simplify your searches against the Lucene indexes.
You'll be able to specify which specific fields in each entity you want indexed, as well as add multiple types of indexes as needed (e.g., stemmed and full text). You'll also be able to manage the index graph for associations, so you can make fairly complex queries through Search/Lucene.
I have found that it's best to rely on Hibernate Search for the text heavy searches, but revert to plain old Hibernate for more traditional searching and for hydrating complex object graphs for result display.
I recommend Compass. It's an open-source project built on top of Lucene that provides a simpler API (than raw Lucene). It integrates nicely with many common Java libraries and frameworks such as Spring and Hibernate.
I have used Lucene in the past to index database tables. The solution works great, but remember that you need to maintain the index: either you update the index every time your objects are persisted, or you have a daemon indexer that dumps the database tables into your Lucene index.
Have you considered Solr? It's built on top of Lucene and offers automatic indexing from a DB and a REST API.
A year ago I would have recommended Compass. It was good at what it does, and technically still happily runs along in the application I developed and maintain.
However, there's no more development on Compass, with efforts having switched to ElasticSearch. From that project's website I cannot quite determine if it's ready for the Big Time yet or even actually alive.
So I'm switching to Hibernate Search, which doesn't give me quite as good a feeling, but that migration is still in its initial stages, so I'll reserve judgement for a while longer.
All these projects are based on Lucene. If you want to implement very advanced features, I advise you to use Lucene directly. If not, you may use Solr, which is a powerful API on top of Lucene that can help you index and search a DB.
Is there a database out there that I can use for a really basic project that stores the schema in terms of documents representing an individual database table?
For example, if I have a schema made up of 5 tables (one, two, three, four and five), then the database would be made up of 5 documents in some sort of "simple" encoding (e.g. JSON, XML, etc.).
I'm writing a Java based app so I would need it to have a JDBC driver for this sort of database if one exists.
CouchDB and you can use it with java
dbslayer is also lightweight, with a MySQL adapter. I guess this will make life a little easier.
I haven't used it for a bit, but HyperSQL has worked well in the past, and it's quite quick to set up:
"... offers a small, fast multithreaded and transactional database engine which offers in-memory and disk-based tables and supports embedded and server modes."
CouchDB works well (@zengr). You may also want to look at MongoDB.
Comparing Mongo DB and Couch DB
Java Tutorial - MongoDB
Also check http://jackrabbit.apache.org/ , not quite a DB but it should also work.