Because of the nature of my application, I need to "namespace" the datastore.
This is the code I see in the docs:
// Set the namespace temporarily to "abc"
String oldNamespace = NamespaceManager.get();
NamespaceManager.set("abc");
try {
    // ... perform operation using current namespace ...
} finally {
    NamespaceManager.set(oldNamespace);
}
However, I'm not sure whether the namespace must be declared in XML configuration before it can be used, or whether namespaces can be created dynamically in code.
Also, I see that MemcacheService has a setNamespace method (although it is already deprecated). Is there something similar for DatastoreService, i.e. a way to namespace a service instance obtained from the DatastoreServiceFactory, so we don't have to switch the namespace back and forth in our code?
You don't have to declare namespaces before using them; if you want to build a multi-tenant application, namespaces are a perfect fit. Basically, you just set the namespace once at the beginning of each request, and that setting automatically applies to all your API calls during that request. Switching back and forth as shown in the docs is only necessary for accessing data that's shared by all tenants.
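For example, the once-per-request setup is often done in a servlet filter. Here is a minimal sketch, assuming App Engine's NamespaceManager API; the resolveTenant helper is a hypothetical placeholder for however your application maps a request to a tenant:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import com.google.appengine.api.NamespaceManager;

public class NamespaceFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        // Set the tenant namespace once; all datastore/memcache calls made
        // while handling this request will then use it automatically.
        if (NamespaceManager.get() == null) {
            NamespaceManager.set(resolveTenant(req));
        }
        chain.doFilter(req, res);
    }

    // Hypothetical helper: derive the tenant namespace from the request,
    // e.g. from the subdomain or from the logged-in user's account.
    private String resolveTenant(ServletRequest req) {
        return req.getServerName().split("\\.")[0];
    }

    @Override
    public void init(FilterConfig config) {
    }

    @Override
    public void destroy() {
    }
}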
Using Hibernate Search 5.11.3 with the programmatic API (no annotations), is there a way to facet on dynamic fields added in a class or field bridge? I don't see any facet configuration available in FieldMetadataBuilder when using MetadataProvidingFieldBridge.
I have tried various combinations of luceneOptions.addSortedDocValuesFieldToDocument() and luceneOptions.addFieldToDocument() in the set() method. This successfully updates the index, but I cannot perform facet queries.
I am trying to do a basic attribute facet/filter: I have a generic table of attributes (id/name) whose values are associated with products. For various reasons I am using the programmatic API, and for attributes in particular I can't make use of the @Facet annotation. So for a product, I added this class bridge to Product.class:
public class ProductClassTagValuesBridge implements FieldBridge
{
    @Override
    public void set(String name, Object value, Document document, LuceneOptions luceneOptions)
    {
        Product product = (Product) value;
        for (TagValue v : product.getTagValues())
        {
            Tag tag = v.getTag();
            String tagName = "tag-" + tag.getId();
            String tagValue = v.getId().toString();
            // not sure if this line is required? Have tried with and without
            luceneOptions.addFieldToDocument(tagName, tagValue, document);
            luceneOptions.addSortedDocValuesFieldToDocument(tagName, tagValue, document);
        }
    }
}
Then I build my (test) faceting request to search tag-56 (which I confirmed is in the index using Luke):
FacetParameterContext context = queryBuilder.facet()
        .name("tag-56")
        .onField("tag-56")
        .discrete();
FacetingRequest facetingRequest = context.createFacetingRequest();
This, when used in the search/FacetManager, gives me the error:
org.hibernate.search.exception.SearchException: HSEARCH000268: Facet request 'TAG_56' tries to facet on field 'tag-56' which either does not exist or is not configured for faceting (via #Facet). Check your configuration.
I have also tried the custom config solution from this post: Hibernate Search: configure Facet for custom FieldBridge
For the custom field, I added a field bridge to tagValues on my product. The same error occurs.
mapping.entity(Product.class).indexed()
        .property("tagValues", ElementType.FIELD).field()
        .analyze(Analyze.NO).store(Store.YES)
        .bridge(ProductTagValuesFieldBridge.class)
Short answer: Hibernate Search does not allow that... yet.
Long answer:
Hibernate Search 5 allows dynamic fields, but does not allow faceting on fields declared in custom bridges.
That is to say, you can add arbitrary values to your index that don't fit a pre-defined schema, but you cannot use faceting on those fields.
Hibernate Search 6 allows faceting (now called "aggregations") on fields declared in custom bridges (just declare them as .aggregable(Aggregable.YES)), but does not allow dynamic fields yet.
EDIT: Starting with 6.0.0.Beta7, dynamic fields are supported thanks to field templates. So the rest of my message is not useful anymore.
See this section of the documentation for more information about field templates. It's totally possible to declare an aggregable, dynamic field in your bridge.
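For illustration only, the declaration inside a binder might look roughly like this (a sketch based on that documentation section; the template name and path glob are made up for this example, so check the exact API against your version):

// Inside the bind() method of a Hibernate Search 6 binder (sketch).
// Declares a template so that any index field matching "tag-*" is a
// string field that can be aggregated (faceted) on.
IndexSchemaElement schemaElement = context.indexSchemaElement();
schemaElement.fieldTemplate("tagTemplate",
        f -> f.asString().aggregable(Aggregable.YES))
    .matchingPathGlob("tag-*");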
Original message about ways to work without dynamic fields (obsolete):
That is to say, if you know the list of tags upon startup, are able to list them all, and are certain they won't change while your application is up, you could declare the fields upfront and use faceting on them. But if you don't know the list of tags upon startup, none of this is possible (yet).
Until dynamic fields are added to Hibernate Search 6, the only solution is to use Hibernate Search 5 and re-implement faceting yourself. As you can expect, this will be complex, and you will have to get your hands dirty with Lucene (a rough sketch follows this list). You will have to:
1. Add fields of type SortedSetDocValuesFacetField to your document in your custom bridge.
2. Ensure Hibernate Search calls FacetsConfig.build on your documents after they are populated. One way to do that (through a hack) would be to declare a dummy @Facet field on your entity, even if you don't use it.
3. Completely ignore Hibernate Search's query feature and perform faceting yourself from an IndexReader. You can get an IndexReader from Hibernate Search as explained here. There's an example of how to perform faceting in org.hibernate.search.query.engine.impl.QueryHits#updateStringFacets.
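To make steps 1 and 3 a little more concrete, here is a rough sketch using Lucene's facet module directly (classes from org.apache.lucene.facet; the variable names are illustrative, and the FacetsConfig.build step from point 2 is still required):

// Step 1, inside the custom bridge's set() method: add a faceting field
// for each tag alongside the regular fields.
document.add(new SortedSetDocValuesFacetField(tagName, tagValue));

// Step 3, at query time, against an IndexReader obtained from Hibernate Search:
SortedSetDocValuesReaderState state = new DefaultSortedSetDocValuesReaderState(indexReader);
FacetsCollector facetsCollector = new FacetsCollector();
FacetsCollector.search(indexSearcher, query, 10, facetsCollector);
Facets facets = new SortedSetDocValuesFacetCounts(state, facetsCollector);
FacetResult result = facets.getTopChildren(10, "tag-56");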
I have an MBean for a class, say foo.bar.Log4j, and I want to use Jolokia to list all loggers.
I have tried reading https://jolokia.org/reference/pdf/jolokia-reference.pdf, but that only tells me how to get values of predefined MBeans such as java.memory.
Please suggest how to get Jolokia to retrieve the loggers for a user-defined class.
You have to keep in mind that even if your MBean is a singleton in your servlet, the servlet itself could be running on multiple endpoints, which is why the namespace alone is not sufficient to identify your MBean instance.
If you want to get all instances of foo.bar.Log4j, you can use the read endpoint like this:
http://yourserver/jolokia/read/foo.bar.Log4j:*
In general, you can get a list of all your available mbeans like this:
http://yourserver/jolokia/list
You should end up with a large JSON document that contains everything you might want to fetch. You will see things like:
"foo.bar.Log4j": {
"name=foo,type=MyLogger": {
"desc": ...
"attr": {
...
}}}
You can now get the attributes using something like this:
http://yourserver/jolokia/read/foo.bar.Log4j:name=foo,type=MyLogger
In addition to type and name, you may see other keys as well, for example context or id. This domain:key=value,... string is the JMX ObjectName of your MBean.
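If you would rather do this from Java than over raw HTTP, the Jolokia Java client (the jolokia-client-java artifact) can issue the same search and read requests. A sketch, using the object name from the example above:

import javax.management.ObjectName;
import org.jolokia.client.J4pClient;
import org.jolokia.client.request.J4pReadRequest;
import org.jolokia.client.request.J4pReadResponse;
import org.jolokia.client.request.J4pSearchRequest;
import org.jolokia.client.request.J4pSearchResponse;

public class ListLoggers {
    public static void main(String[] args) throws Exception {
        J4pClient client = J4pClient.url("http://yourserver/jolokia").build();

        // Find every MBean registered under the foo.bar.Log4j domain.
        J4pSearchResponse found = client.execute(new J4pSearchRequest("foo.bar.Log4j:*"));
        for (ObjectName name : found.getObjectNames()) {
            // Read all attributes of each matching MBean.
            J4pReadResponse attrs = client.execute(new J4pReadRequest(name));
            System.out.println(name + " -> " + attrs.getValue());
        }
    }
}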
Is there any way of getting the metadata for a Solr core?
For instance, I know the core name (and can obtain a SolrServer from that), and I also know the field name.
Is there any way to determine the field's metadata, though? Specifically, I would like to know whether the field type is an int or a double.
Thanks
You can make a request to the Luke request handler:
http://localhost:8983/solr/corename/admin/luke?show=schema&wt=json&_=1453816769771
The output will include the schema for the core, along with the defined fields, their settings and their types:
{"fields":{"xyz":{"type":"string","flags":"I-S-M---OF-----l","copyDests":[],"copySources":[]}, .... }
A neat trick for finding these endpoints is to watch the 'network' tab while browsing Solr's admin interface: the admin interface is just a static HTML/JavaScript frontend that makes all its requests for actual content to the Solr server behind the scenes.
This might seem like an odd question, but I am trying to get a handle on the "best practice" for converting an application that uses something like Roo's or Grails' controller generation (which provides basic CRUD functionality) into something that returns a JSON response body instead, for use in a JavaScript application.
The ambiguity of technology here is because I haven't actually started the project yet. I am still trying to decide which (Java-based) technology to use, and to see what kind of productivity tools I should learn/use in the process. It will be a web application, and it will use a database persistence layer. All other details are up in the air.
Perhaps the easiest way to accomplish my goal is to develop using some sort of AJAX plugin to start with, but most of the tutorials and descriptions out there say to start with a normal MVC architecture. Roo seems to make conversion of the controllers it generates to JSON-friendly return types difficult, and I'm not familiar enough with Groovy/Grails to know what it takes to do that.
This is mostly a learning experience for me, and I am open to any criticism or advice, but since this is a Q&A forum, I realize I need to incorporate an objective question of some sort. To fill that need, I ask:
What is the best way to set up an AJAX/RESTful interface for my entities in Roo and/or Grails?
I recently did exactly this with a Grails application and found it surprisingly easy to take the generated controllers and get them to output JSON or XML or the HTML from the view depending on the content negotiation.
The places in the Grails manual to look into are the section(s) on Content Negotiation and, if you need to deal with JSON or XML input, marshaling.
To get JSON and XML output, I changed the default list() method to this (I have a Session object in this case, one of my domain classes):
def list() {
    params.max = Math.min(params.max ? params.int('max') : 10, 100)
    def response = [sessionInstanceList: Session.list(params), sessionInstanceTotal: Session.count()]
    withFormat {
        html response
        json { render response as JSON }
        xml { render response as XML }
    }
}
Anywhere you are returning just an object by default, you will want to replace the returned value with the withFormat block.
You also may need to update your Config.groovy file where it deals with mime types. Here's what I have:
grails.mime.file.extensions = true // enables the parsing of file extensions from URLs into the request format
grails.mime.use.accept.header = true
grails.mime.types = [ html: ['text/html', 'application/xhtml+xml'],
                      xml: ['text/xml', 'application/xml'],
                      text: 'text/plain',
                      js: 'text/javascript',
                      rss: 'application/rss+xml',
                      atom: 'application/atom+xml',
                      css: 'text/css',
                      csv: 'text/csv',
                      all: '*/*',
                      json: ['application/json', 'text/json'],
                      form: 'application/x-www-form-urlencoded',
                      multipartForm: 'multipart/form-data'
                    ]
As input (to an update() or save() action, for example), JSON and XML payloads are automatically unmarshaled and bound just like form input would be, but I've found that the unmarshaling process is a bit picky (especially with JSON).
I found that, for JSON to be handled correctly in the update() method, the class attribute had to be present and correct on the inbound JSON object. Since the library I was using in my client application didn't make that easy to deal with, I switched to using XML instead.
I'm using the JAX-WS API for WSDL generation.
My Java bean class is something like:
public class MyBean {
    private String nullableField;
    private String notNullableField;
    // and here the appropriate getters/setters
}
When the WSDL is generated, the nullability of these fields is not specified.
Question: what do I need to specify (and where) so that the fields get the corresponding nillable values in the WSDL? I.e., how can I specify field nullability for the WSDL in plain Java code?
At the moment I generate the WSDL and then manually correct the XML for field nullability. That's not convenient; I want the nillable attribute to be generated by JAX-WS automatically.
Any suggestions?
Thanks.
AFAIK, it is still not possible to generate nillable=false when using @WebParam, i.e. when using a Java-first approach (as discussed in this thread). Actually, I'd recommend using a WSDL-first approach if you want fine-grained control.
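One caveat worth adding for the bean in the question: the schema types for bean fields are produced by JAXB rather than by @WebParam, so JAXB annotations can express nullability there. A sketch, assuming MyBean can be mapped with field access:

import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlElement;

@XmlAccessorType(XmlAccessType.FIELD)
public class MyBean {

    // Emitted in the generated schema as nillable="true".
    @XmlElement(nillable = true)
    private String nullableField;

    // Emitted as minOccurs="1", i.e. required and not nillable by default.
    @XmlElement(required = true)
    private String notNullableField;

    // getters/setters as before
}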