I am writing a Java-based REST API using RESTEasy. I have the following structure:
Category has many groups; group has many preferences
I have 3 resources: 1. category, 2. group, 3. preference
I want to support these API endpoints:
/categories/{cat_id}/groups/ (returns all groups for the category)
/groups/{group_id}/preferences/ (returns all prefs for the group)
/preferences/{preference_id} (returns a pref identified by the id passed)
/preferences (returns all prefs)
I have 3 resource classes, one for each resource mentioned above.
I am confused about how to structure the methods and where they should go. The following are my specific questions:
/groups/{group_id}/preferences/: should the implementation go under the GroupsResourceImpl class or the PreferenceResourceImpl class?
The PreferenceResourceImpl class has the implementation for the /preferences endpoint, which returns all the global preferences. So should the /groups/{group_id}/preferences endpoint reside under GroupResource and call a method on the PreferenceResource (a method that takes the group id as an extra param)?
Basically, it makes no difference for the implementation; you can do it either way. You should group them in whatever way organizes the code most logically for you.
Personally, I like to group resource implementations so that similar code ends up together. This can be achieved by grouping together the paths that require similar ways of extracting data from the infrastructure. For example, if all your data is in a relational database, then all your /*/preferences queries would basically be SELECT * FROM preferences WHERE ...different stuff..., so it would be most logical to group them all by the table you are querying from. For example:
GroupResource: /categories/{cat_id}/groups/
Because it would always query from table "groups": SELECT * FROM groups WHERE parent_cat_id = {cat_id}
PreferenceResource: /groups/{group_id}/preferences/, /preferences/{preference_id}, /preferences.
Because it would always query from table "preferences": SELECT * FROM preferences WHERE parent_group = {group_id}, ... WHERE id = {preference_id}, SELECT * FROM preferences
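For the relational case, a minimal JAX-RS sketch of such a PreferenceResource might look like the following (the DAO and its methods are placeholders, not part of your code; RESTEasy picks the class up through the standard annotations):

import java.util.List;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/")
@Produces(MediaType.APPLICATION_JSON)
public class PreferenceResourceImpl {

    private final PreferenceDao preferenceDao = new PreferenceDao(); // placeholder DAO

    @GET
    @Path("/preferences")
    public List<Preference> getAllPreferences() {
        return preferenceDao.findAll();                  // SELECT * FROM preferences
    }

    @GET
    @Path("/preferences/{preference_id}")
    public Preference getPreference(@PathParam("preference_id") long preferenceId) {
        return preferenceDao.findById(preferenceId);     // ... WHERE id = {preference_id}
    }

    @GET
    @Path("/groups/{group_id}/preferences")
    public List<Preference> getPreferencesForGroup(@PathParam("group_id") long groupId) {
        return preferenceDao.findByGroupId(groupId);     // ... WHERE parent_group = {group_id}
    }
}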
However, if your categories/groups/preferences reside in some kind of graph database, then to query /groups/{group_id}/preferences you would first need to find the group and then get all the preferences from inside it. Therefore, it would be more logical to group the resources like this:
CategoryResource: /categories/{cat_id}/groups/
Because it would start querying from "categories" graph: categories.get(cat_id).getGroups()
GroupResource: /groups/{group_id}/preferences/
Because it would start querying from "groups" graph: groups.get(group_id).getPreferences()
PreferenceResource: /preferences/{preference_id}, /preferences.
Because it would start querying from "preferences" graph: preferences.get(preference_id) and preferences.getAll()
I am trying to sort AEM Query Builder search results based on a particular value of a particular property, the way databases like MySQL can sort on a column's value (for example ORDER BY FIELD('columnName','anyColumnName')). Can we have something like this in AEM?
Suppose we have 5 Assets under path /content/dam/Assets.
Asset Name------------dc:title
1.jpg------------------Apple
2.jpg------------------Cat
3.jpg------------------Cat
4.jpg------------------Ball
5.jpg------------------Drag
I need the assets where dc:title = Cat at the top of the results, and I also need the other results sorted ascending. The expected result is given below:
2.jpg------------------Cat
3.jpg------------------Cat
1.jpg------------------Apple
4.jpg------------------Ball
5.jpg------------------Drag
Note: using AEM version 6.2.
You can use the orderby predicate with a value of @jcr:content/metadata/dc:title to sort by dc:title with the QueryBuilder. /libs/cq/search/content/querydebug.html is an interface for testing queries on your instance. ACS Commons has a good breakdown of all the out-of-the-box predicates.
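For illustration, here is a rough sketch of that ordering with the QueryBuilder Java API (the resourceResolver and session variables are assumed to come from your own context; the path and property names are taken from the question):

// QueryBuilder, PredicateGroup, Query, SearchResult and Hit come from the com.day.cq.search API.
// Sort the DAM assets under /content/dam/Assets by dc:title, ascending.
Map<String, String> predicates = new HashMap<>();
predicates.put("path", "/content/dam/Assets");
predicates.put("type", "dam:Asset");
predicates.put("orderby", "@jcr:content/metadata/dc:title");
predicates.put("orderby.sort", "asc");

QueryBuilder queryBuilder = resourceResolver.adaptTo(QueryBuilder.class);
Query query = queryBuilder.createQuery(PredicateGroup.create(predicates), session);
SearchResult result = query.getResult();
for (Hit hit : result.getHits()) {
    // hit.getPath() would yield e.g. /content/dam/Assets/2.jpg
}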
If you want to pull Cats to the top of the results with a single query, you could write a custom predicate. The sample code from ACS Commons shows an example. Adobe has documentation as well.
We have an application with a custom database migrator which we want to replace with Flyway.
These migrations are split into some categories like "account" for user management and "catalog" for the product catalog.
Files are named $category.migration.$version.sql. Here, $category is one of the above categories and $version is an integer version starting from 0.
e.g. account.migration.23.sql
Although one could argue that each category should be a separate database, in fact it isn't and a major refactoring would be required to change that.
Also I could use one schema per category, but again this would require rewriting all SQL queries.
So I did the following:
Move $category.migration.$version.sql to /sql/$category/V$version__$category.sql (e.g. account.migration.1.sql becomes /sql/account/V1__account.sql)
Use a metadata table per category
Set the baseline version to zero
In code, that would be:
import org.flywaydb.core.Flyway;
import org.flywaydb.core.api.MigrationVersion;

String[] _categories = new String[] { "catalog", "account" };
for (String _category : _categories) {
    Flyway _flyway = new Flyway();
    _flyway.setDataSource(databaseUrl.getUrl(), databaseUrl.getUser(), databaseUrl.getPassword());
    _flyway.setBaselineVersion(MigrationVersion.fromVersion("0"));
    _flyway.setLocations("classpath:/sql/" + _category);
    _flyway.setTarget(MigrationVersion.fromVersion(_version + "")); // _version holds the target version for this category
    _flyway.setTable(_category + "_schema_version");
    _flyway.setBaselineOnMigrate(true); // (1)
    _flyway.migrate();
}
So there would be the metadata tables catalog_schema_version and account_schema_version.
Now the issue is as follows:
Starting with an empty database I would like to apply all pre-existing migrations per category, as done above.
If I remove _flyway.setBaselineOnMigrate(true); (1), then the catalog migration (the first one) succeeds, but it complains for account that the schema "public" is not empty.
On the other hand, setting _flyway.setBaselineOnMigrate(true); causes the following behavior:
The migration of "catalog" succeeds, but V0__account.sql is ignored and Flyway starts with V1__account.sql, maybe because it somehow still thinks the database was already baselined?
Does anyone have a suggestion for resolving the problem?
Your easiest solution is to keep the schema_version tables each in a separate schema. I've answered a very similar question here.
Regarding your observation on baselining, that is expected behavior. The migration of account starts at v1 because, with the combination of baseline=0, baselineOnMigrate=true and a non-empty target schema (because catalog has populated it), Flyway has determined that this is a pre-existing database that is equal to the baseline, and thus starts at v1.
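A minimal sketch of that idea, reusing the variables from your snippet and assuming a Flyway version that exposes setSchemas (the metadata table is created in the first schema of the list; the schema names here are only examples):

for (String _category : _categories) {
    Flyway _flyway = new Flyway();
    _flyway.setDataSource(databaseUrl.getUrl(), databaseUrl.getUser(), databaseUrl.getPassword());
    // Keep each category's metadata table in its own schema (e.g. "flyway_catalog"),
    // so Flyway's empty/non-empty check no longer looks at the shared "public" schema.
    // Note: Flyway also makes the first listed schema the default during migration,
    // so the migration scripts may need to qualify table names (e.g. public.some_table).
    _flyway.setSchemas("flyway_" + _category);
    _flyway.setTable("schema_version");
    _flyway.setLocations("classpath:/sql/" + _category);
    _flyway.migrate();
}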
I am new to ElasticSearch and Couchbase. I am building a sample Java application to learn more about ElasticSearch and Couchbase.
From reading about the ElasticSearch Java API, filters are better used in cases where sorting on score is not necessary, and for caching.
I still haven't figured out how to use FilterBuilders and have the following questions:
Can FilterBuilders be used alone to search?
Or do they always have to be used with a query? (If true, can someone please list an example?)
Going through the documentation: if I want to perform a search based on field values and want to use FilterBuilders, how can I accomplish that? (Using AndFilterBuilder, TermFilterBuilder or InFilterBuilder? I am not clear about the differences between them.)
For the 3rd question, I actually tested it with a search using queries and one using filters, as shown below.
I got an empty result (no rows) when I tried the search using FilterBuilders. I am not sure what I am doing wrong.
Any examples will be helpful. I have had a tough time going through the documentation, which I found sparse, and even searching led to various unreliable user forums.
private void processQuery() {
    SearchRequestBuilder srb = getSearchRequestBuilder(BUCKET);
    QueryBuilder qb = QueryBuilders.fieldQuery("doc.address.state", "TX");
    srb.setQuery(qb);
    SearchResponse resp = srb.execute().actionGet();
    System.out.println("response :" + resp);
}

private void searchWithFilters() {
    SearchRequestBuilder srb = getSearchRequestBuilder(BUCKET);
    srb.setFilter(FilterBuilders.termFilter("doc.address.state", "tx"));
    //AndFilterBuilder andFb = FilterBuilders.andFilter();
    //andFb.add(FilterBuilders.termFilter("doc.address.state", "TX"));
    //srb.setFilter(andFb);
    SearchResponse resp = srb.execute().actionGet();
    System.out.println("response :" + resp);
}
--UPDATE--
As suggested in the answer, changing to lowercase "tx" works. With this question resolved, I still have the following questions:
In what scenario(s) are filters used with a query? What purpose does this serve?
What is the difference between InFilter, TermFilter and MatchAllFilter? Any illustration will help.
Right, you should use filters to exclude documents from even being considered when executing the query. Filters are faster since they don't involve any scoring, and they are cacheable as well.
That said, it's pretty clear that you have to use the filter with the search API, which does execute a query and accepts an optional filter. If you only have a filter, you can just use the match_all query together with your filter. A filter can be a simple one, or a compound one in order to combine multiple filters together.
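For example, a rough sketch against the Java API used in the question, reusing its helper and field (builder names may differ between ElasticSearch versions):

// Only filtering is needed, so wrap the term filter in a filtered query with match_all.
SearchRequestBuilder srb = getSearchRequestBuilder(BUCKET);
srb.setQuery(QueryBuilders.filteredQuery(
        QueryBuilders.matchAllQuery(),
        FilterBuilders.termFilter("doc.address.state", "tx")));
SearchResponse resp = srb.execute().actionGet();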
Regarding the Java API, the names used are the names of the filters available, no big difference. Have a look at this search example for instance. In your code I don't see where you do setFilter on your SearchRequestBuilder object. You don't seem to need the and filter either, since you are using a single filter. Furthermore, it might be that you are indexing using the default mappings, thus the term "TX" is lowercased. That's why when you search using the term filter you don't find any match. Try searching for "tx" lowercased.
You can either change your mapping if you want to keep the "TX" term as it is while indexing, probably setting the field as not_analyzed if it should only be a single token. Otherwise you can change the filter; you might want to have a look at a query that is analyzed, so that your query will be analyzed the same way the content was indexed.
Have a look at the query DSL documentation for more information regarding queries and filters:
MatchAllFilter: matches all your documents, not that useful I'd say
TermFilter: Filters documents that have fields that contain a term (not analyzed)
AndFilter: a compound filter used to combine two or more filters with an AND
Don't know what you mean by InFilterBuilder, couldn't find any filter with this name.
The query usually contains what the user types in through the text search box. Filters are more a way to refine the search, for example by clicking on facet entries. That's why you would still have the query plus one or more filters.
To append to what #javanna said:
A lot of confusion can come from the fact that filters can be defined in several ways:
standalone (with a required query, for instance match_all if all you need is the filters) (http://www.elasticsearch.org/guide/reference/api/search/filter/)
or as part of a filtered query (http://www.elasticsearch.org/guide/reference/query-dsl/filtered-query/)
What's the difference, you might ask? Indeed, you can construct exactly the same logic in both ways.
The difference is that a query operates on BOTH the result set and any facets you have defined, whereas a filter (when defined standalone) only operates on the result set and NOT on any facets you may have defined (explained here: http://www.elasticsearch.org/guide/reference/api/search/filter/).
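A short sketch of the two styles with the Java API from the question (reusing its field name; exact method names vary by version):

// Standalone (top-level) filter: facets are computed before it is applied, so they ignore it.
srb.setFilter(FilterBuilders.termFilter("doc.address.state", "tx"));

// Filtered query: the filter is part of the query itself, so facets only see the filtered documents.
srb.setQuery(QueryBuilders.filteredQuery(
        QueryBuilders.matchAllQuery(),
        FilterBuilders.termFilter("doc.address.state", "tx")));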
To add to the other answers, InFilter is only used with FilterBuilders. Its definition is: a filter for a field based on several terms, matching on any of them.
The query Java API uses FilterBuilders, which is a factory for filter builders that lets you create a query dynamically from Java code. We do this using a form, and we build our query based on the user's selections in it via checkboxes, options, and dropdowns.
Here is some example code for FilterBuilders, and below is a snippet from that link that uses InFilter:
FilterBuilder filterBuilder;
User user = (User) auth.getPrincipal();
if (user.getGroups() != null && !user.getGroups().isEmpty()) {
    filterBuilder = FilterBuilders.boolFilter()
            .should(FilterBuilders.nestedFilter("userRoles", FilterBuilders.termFilter("userRoles.key", auth.getName())))
            .should(FilterBuilders.nestedFilter("groupRoles", FilterBuilders.inFilter("groupRoles.key", user.getGroups().toArray())));
} else {
    filterBuilder = FilterBuilders.nestedFilter("userRoles", FilterBuilders.termFilter("userRoles.key", auth.getName()));
}
...
I have a web application built in Django + Python that interacts with web services (written in Java).
Right now, all the database management is done by the web services, i.e. all CRUD operations on the actual database are done by the web services.
Now I have to track all user activities done on my website in some log table.
For example, if a user posts a new article, then a new row is created in the Articles table by the web services, and alongside that I need to add a new row to the log table, something like "User Raman has posted a new article (with ID, title etc.)".
I have to do this for all objects in my database, like "Article", "Media", "Comments" etc.
Note: I am using PostgreSQL.
So what is the best way to achieve this? (Should I do it in PostgreSQL or Java, and how?)
So, you have UI <-> Web Services <-> DB
Since the web services talk to the DB, and the web services contain the business logic (i.e. I guess you validate stuff there, create your queries and execute them), the best place to 'log' activities is in the services themselves.
IMO, logging PostgreSQL transactions is a different thing. It's not the same as logging 'user activities' anymore.
EDIT: This still means you create a DB schema for 'logs' and write them to the DB.
Second EDIT: Catching log-worthy events in the UI and then logging them from there might not be the best idea either. You would have to rewrite the logging if you ever decide to replace the UI or, for example, write an alternate UI for, say, mobile devices, or something else.
For an audit table within the DB itself, have a look at the PL/pgSQL Trigger Audit Example
This logs every INSERT, UPDATE, DELETE into another table.
In your log table you can have various columns, including:
user_id (the user that did the action)
activity_type (the type of activity, such as view or commented_on)
object_id (the actual object that it concerns, such as the Article or Media)
object_type (the type of object; this can be used later, in combination with object_id, to look up the object in the database)
This way, you can keep track of all actions the users do. You'd need to update this table whenever something happens that you wish to track.
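Since the CRUD work happens in your Java web services (per the question), here is a hedged JDBC sketch of writing such a row there; the table name activity_log, the created_at column and the surrounding variables (connection, userId, articleId) are assumptions for illustration:

// Record one user activity, e.g. "user 42 commented_on Article 7".
String sql = "INSERT INTO activity_log (user_id, activity_type, object_id, object_type, created_at) "
           + "VALUES (?, ?, ?, ?, now())";
try (PreparedStatement ps = connection.prepareStatement(sql)) {
    ps.setLong(1, userId);            // the user that did the action
    ps.setString(2, "commented_on");  // the type of activity
    ps.setLong(3, articleId);         // the object it concerns
    ps.setString(4, "Article");       // used together with object_id to look the object up later
    ps.executeUpdate();
}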
Whenever we had to do this, we overrode signals for every model and possible action.
https://docs.djangoproject.com/en/dev/topics/signals/
You can have the signal do whatever you want, from injecting some HTML into the page, to making an entry in the database. They're an excellent tool to learn to use.
I used django-audit-log and I am very satisfied.
Django-audit-log can track multiple models, each in its own additional table. All of these tables are pretty uniform, so it should be fairly straightforward to create a SQL view that shows data for all models.
Here is what I've done to track a single model ("Pauza"):
class Pauza(models.Model):
    started = models.TimeField(null=True, blank=False)
    ended = models.TimeField(null=True, blank=True)
    # ... more fields ...
    audit_log = AuditLog()
If you want changes to show in Django Admin, you can create an unmanaged model (but this is by no means required):
class PauzaAction(models.Model):
    started = models.TimeField(null=True, blank=True)
    ended = models.TimeField(null=True, blank=True)
    # ... more fields ...

    # fields added by Audit Trail:
    action_id = models.PositiveIntegerField(primary_key=True, default=1, blank=True)
    action_user = models.ForeignKey(User, null=True, blank=True)
    action_date = models.DateTimeField(null=True, blank=True)
    action_type = models.CharField(max_length=31, choices=(('I', 'create'), ('U', 'update'), ('D', 'delete'),), null=True, blank=True)
    pauza = models.ForeignKey(Pauza, db_column='id', on_delete=models.DO_NOTHING, default=0, null=True, blank=True)

    class Meta:
        db_table = 'testapp_pauzaauditlogentry'
        managed = False
        app_label = 'testapp'
The table testapp_pauzaauditlogentry is automatically created by django-audit-log; this merely creates a model for displaying data from it.
It may be a good idea to throw in some rude tamper protection:
class PauzaAction(models.Model):
    # ... all like above, plus:

    def save(self, *args, **kwargs):
        raise Exception('Permission Denied')

    def delete(self, *args, **kwargs):
        raise Exception('Permission Denied')
As I said, I imagine you could create a SQL view with the four action_ fields and an additional 'action_model' field that could contain varchar references to the model itself (maybe just the original table name).
I have 4 tables involved in this query.
Campaign - many to one business
Business - one to many client
Client - one to one contact
Contact
In Contact there is the field contact_name, which is unique. I need to retrieve all campaigns related to a contact (via client and business) where the campaign field type equals 2.
What is the best way to do this with Hibernate?
In SQL it will look like this:
select *
from campaign,contact, business, client
where campaign.type=2
and client.contact_id = contact.contact_id
and contact.name = 'Josh'
and client.business_id = business.business_id
and campaign.business_id = business.business_id
I think that something like the following HQL should work (the entity and property names are assumed from your mapping; since business to client is one-to-many, the clients collection needs an explicit join):
select campaign
from Campaign campaign
    join campaign.business business
    join business.clients client
where campaign.type = 2
  and client.contact.contact_name = :name
You can execute native SQL queries too, using the createSQLQuery() method of Session.
You can also use scalar properties (addScalar()) to avoid the overhead of using ResultSetMetadata.
You can find more information on this here.
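For illustration, a rough sketch of that native-SQL route with scalar results, based on the query from the question (the session variable is assumed to be in scope, StandardBasicTypes is org.hibernate.type.StandardBasicTypes from Hibernate 3.6+, and the column types are assumptions about your schema):

// Fetch campaign id and contact name as plain scalars instead of mapped entities.
List<Object[]> rows = session.createSQLQuery(
        "select c.campaign_id, ct.name "
        + "from campaign c, business b, client cl, contact ct "
        + "where c.type = 2 "
        + "and c.business_id = b.business_id "
        + "and cl.business_id = b.business_id "
        + "and cl.contact_id = ct.contact_id "
        + "and ct.name = :name")
    .addScalar("campaign_id", StandardBasicTypes.LONG)   // scalar mappings skip the ResultSetMetadata lookup
    .addScalar("name", StandardBasicTypes.STRING)
    .setParameter("name", "Josh")
    .list();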