I'm using Hibernate Search / Lucene to maintain a really simple index to find objects by name - no fancy stuff.
My model classes all extend a class NamedModel which looks basically as follows:
@MappedSuperclass
public abstract class NamedModel {

    @Column(unique = true)
    @Field(store = Store.YES, index = Index.UN_TOKENIZED)
    protected String name;
}
My problem is that I get a BooleanQuery$TooManyClauses exception when querying the index for objects with names starting with a specific letter, e.g. "name:l*".
A query like "name:lin*" will work without problems, in fact any query using more than one letter before the wildcard will work.
While searching the net for similar problems, I only found people using pretty complex queries, and it always seemed to be the complexity causing the exception. I don't want to increase maxClauseCount because I don't think it's good practice to change limits just because you hit them.
What's the problem here?
Lucene tries to rewrite your query from the simple name:l* into a query containing all indexed terms that start with l (something like name:lou OR name:la OR name:...), I believe because this form is meant to be faster.

As a workaround, you may use a ConstantScoreQuery wrapping a PrefixFilter instead of a PrefixQuery:
// instead of new PrefixQuery(prefix)
new ConstantScoreQuery(new PrefixFilter(prefix));
However, this changes the scoring of documents (and hence their sorting, if you rely on score for sorting). As we faced the challenge of needing score (and boost), we decided on a solution that uses PrefixQuery where possible and falls back to the constant-score variant where needed:
new PrefixQuery(prefix) {
    @Override
    public Query rewrite(final IndexReader reader) throws IOException {
        try {
            return super.rewrite(reader);
        } catch (final TooManyClauses e) {
            log.debug("falling back to ConstantScoreQuery for prefix " + prefix + " (" + e + ")");
            final Query q = new ConstantScoreQuery(new PrefixFilter(prefix));
            q.setBoost(getBoost());
            return q;
        }
    }
};
(As an enhancement, one could use some kind of LRU map to cache prefixes that failed before, to avoid going through a costly rewrite again.)
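For illustration, a minimal sketch of that enhancement, using an access-ordered LinkedHashMap as a simple LRU map; the catch block above would then call recordFailure() before returning the fallback query. The class and method names here are mine, not from any Lucene or Hibernate Search API:

import java.util.LinkedHashMap;
import java.util.Map;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.ConstantScoreQuery;
import org.apache.lucene.search.PrefixFilter;
import org.apache.lucene.search.PrefixQuery;
import org.apache.lucene.search.Query;

public class PrefixQueryFactory {
    // Access-ordered LRU map of prefixes that previously hit TooManyClauses.
    private final Map<String, Boolean> failedPrefixes =
            new LinkedHashMap<String, Boolean>(64, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, Boolean> eldest) {
                    return size() > 1000; // cap the number of remembered prefixes
                }
            };

    public Query queryFor(Term prefix) {
        if (failedPrefixes.containsKey(prefix.text())) {
            // Known-bad prefix: skip the costly rewrite attempt entirely.
            return new ConstantScoreQuery(new PrefixFilter(prefix));
        }
        return new PrefixQuery(prefix);
    }

    public void recordFailure(Term prefix) {
        failedPrefixes.put(prefix.text(), Boolean.TRUE);
    }
}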
I can't help you with integrating this into Hibernate Search though. You might ask after you've switched to Compass ;)
Related
I am new to Java and working on a desktop application, learning Java standards and concepts as I go. One portion of the application, which also makes up the majority of it, does XML parsing: the application parses a huge XML file and generates readable text depending on the nodes and their values. As of now, everything in the parsing logic is hard-coded.
As the application grows and new requirements and change requests come in, I would prefer to get rid of all the hard-coded strings and maintain them in a config file (like a lookup table) instead. I would like to revise the existing behavior, which looks like this:
public String getReadable() {
    action = action.getFirstChild();
    switch (action.getNodeName()) {
        case "Drive": {
            String name = XMLUtility.getChildFrom(action, 0).getTextContent();
            String model = XMLUtility.getChildFrom(action, 1).getTextContent();
            String speed = XMLUtility.getChildFrom(action, 2).getTextContent();
            return name + " is driving the " + model + " vehicle at a speed of " + speed + " mph";
        }
        case "Stop": {
            String model = action.getFirstChild().getTextContent();
            return model + " is now stopped";
        }
    }
    return "";
}
There are hundreds of similar functions in my application's parsing mechanism. I would like to maintain all the hard-coded strings centrally, which is a more flexible way of maintaining the code.
But as I do not have any prior experience in Java, I am looking for a standard way of doing this. Please recommend a legible approach to improve my code.
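For illustration, a minimal sketch of one standard approach, assuming the readable texts can be reduced to templates with positional placeholders (the file name, keys and class below are made up): keep the per-node templates in a properties file and fill them in with java.text.MessageFormat.

// messages.properties (hypothetical file on the classpath):
//   Drive={0} is driving the {1} vehicle at a speed of {2} mph
//   Stop={0} is now stopped

import java.text.MessageFormat;
import java.util.ResourceBundle;

public class Messages {
    private static final ResourceBundle TEMPLATES = ResourceBundle.getBundle("messages");

    // Looks up the template by node name and fills in the placeholders,
    // e.g. format("Drive", name, model, speed).
    public static String format(String nodeName, Object... values) {
        return MessageFormat.format(TEMPLATES.getString(nodeName), values);
    }
}

The switch in getReadable() would then shrink to extracting the child values and calling Messages.format(action.getNodeName(), name, model, speed).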
I'm using common.graph from Google Guava, version 21.0. It suits my use case very well except for one aspect: persistence. The graph seems to be in-memory only; the graph classes do not implement Serializable, as was explained in this issue's posts.
Google describes three models to store the topology. The third option is:
a separate data repository (for example, a database) stores the topology
But that's all. I didn't find any methods in the package for hooking up a separate data repository. Is there any way to do this? Or is the only way to use the nodes() and edges() methods to get a Set of my nodes and a Set of my edges? I could persist those in a database if I implement Serializable in these classes, and restore the graph by calling addNode(Node) and addEdge(Source, Target, Edge) (there are no addAll methods). But this seems like a workaround.
Thanks for your support!
To briefly recap the reason why Guava's common.graph classes aren't Serializable: Java serialization is fragile because it depends on the details of the implementation, and that can change at any time, so we don't support it for the graph types.
In the short term, your proposed workaround is probably your best bet, although you'll need to be careful to store the endpoints (source and target) of the edges alongside the edge objects so that you'll be able to rebuild the graph as you describe. And in fact this may work for you in the longer term, too, if you've got a database that you're happy with and you don't need to worry about interoperation with anyone else.
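A minimal sketch of that rebuild step, assuming a directed graph whose nodes are stored as plain strings and whose edges are stored as endpoint pairs (the EdgeRow type below is a made-up stand-in for your database rows):

import java.util.List;
import com.google.common.graph.GraphBuilder;
import com.google.common.graph.MutableGraph;

public class GraphRestore {
    // Hypothetical persisted form of one edge: its two endpoint node ids.
    static class EdgeRow {
        final String source;
        final String target;
        EdgeRow(String source, String target) {
            this.source = source;
            this.target = target;
        }
    }

    static MutableGraph<String> restore(List<String> nodes, List<EdgeRow> edges) {
        MutableGraph<String> graph = GraphBuilder.directed().build();
        for (String n : nodes) {
            graph.addNode(n); // also covers isolated nodes that have no edges
        }
        for (EdgeRow e : edges) {
            graph.putEdge(e.source, e.target); // endpoints are added implicitly if missing
        }
        return graph;
    }
}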
As I mentioned in that GitHub issue, another option is to persist your graph to some kind of file format. (Guava itself does not provide a mechanism for doing this, but JUNG will for common.graph graphs once I can get 3.0 out the door, which I'm still working on.) Note that most graph file formats (at least the ones I'm familiar with) have fairly limited support for storing node and edge metadata, so you might want your own file format (say, something based on protocol buffers).
One way I found of storing the graph was through the DOT format, like so:
import com.google.common.graph.EndpointPair;
import com.google.common.graph.Network;

public class DOTWriter {
    // Written against Guava's Network, whose edges() returns the edge objects
    // themselves; incidentNodes(e) yields each edge's endpoints. Assumes a
    // directed graph, so source()/target() are defined.
    public static <N, E> String write(final Network<N, E> graph) {
        StringBuilder sb = new StringBuilder();
        sb.append("strict digraph G {\n");
        for (N n : graph.nodes()) {
            sb.append("  \"").append(n).append("\";\n");
        }
        for (E e : graph.edges()) {
            EndpointPair<N> endpoints = graph.incidentNodes(e);
            sb.append("  \"").append(endpoints.source())
              .append("\" -> \"").append(endpoints.target()).append("\";\n");
        }
        sb.append("}");
        return sb.toString();
    }
}
This will produce something like
strict digraph G {
node_A;
node_B;
node_A -> node_B;
}
It's very easy to read this back and rebuild the graph in memory. If your nodes are complex objects, you should store them separately, though.
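For what it's worth, a sketch of that reverse direction, assuming the simplified output above (one node or edge per line, no escaping or attributes):

import com.google.common.graph.GraphBuilder;
import com.google.common.graph.MutableGraph;

public class DOTReader {
    static MutableGraph<String> read(String dot) {
        MutableGraph<String> graph = GraphBuilder.directed().build();
        for (String line : dot.split("\n")) {
            line = line.trim().replace("\"", "").replace(";", "");
            if (line.isEmpty() || line.startsWith("strict") || line.equals("}")) {
                continue; // skip the header and footer lines
            }
            if (line.contains("->")) {
                String[] parts = line.split("->");
                graph.putEdge(parts[0].trim(), parts[1].trim());
            } else {
                graph.addNode(line);
            }
        }
        return graph;
    }
}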
Based on @Maria Ines Parnisari's amazing answer, I modified it a little 😊. Then, drawing the result with Mermaid (a Markdown plugin), I get a clear diagram in IntelliJ IDEA (>= 2021.2, which supports Markdown better)!

Code:
//noinspection UnstableApiUsage
MutableGraph<String> graph = GraphBuilder.directed()
        .allowsSelfLoops(false)
        .build();

//noinspection UnstableApiUsage
graph.addNode("root");
graph.putEdge("root", "s1_1");
graph.putEdge("root", "s1_2");
graph.putEdge("root", "s1_3");
graph.putEdge("s1_2", "s2");
graph.putEdge("s2", "s3");
graph.putEdge("s3", "s4");
graph.putEdge("s3", "s5");
graph.putEdge("s4", "s6");
graph.putEdge("s5", "s6");
graph.putEdge("s1_1", "s6");
graph.putEdge("s1_1", "s2");

// print the Mermaid text, then copy it
StringBuilder sb = new StringBuilder();
for (EndpointPair<String> edge : graph.edges()) {
    // the arrow must be `-->` for Mermaid to draw it
    sb.append(edge.nodeU() + " --> " + edge.nodeV() + "\n");
}
System.out.println(sb);
I am trying to get a grasp on Google App Engine programming and wonder what the difference between these two methods is - if there even is a practical difference.
Method A)
public Collection<Conference> getConferencesToAttend(Profile profile)
{
    List<String> keyStringsToAttend = profile.getConferenceKeysToAttend();
    List<Conference> conferences = new ArrayList<Conference>();
    for (String conferenceString : keyStringsToAttend)
    {
        conferences.add(ofy().load().key(Key.create(Conference.class, conferenceString)).now());
    }
    return conferences;
}
Method B)
public Collection<Conference> getConferencesToAttend(Profile profile)
{
    List<String> keyStringsToAttend = profile.getConferenceKeysToAttend();
    List<Key<Conference>> keysToAttend = new ArrayList<>();
    for (String keyString : keyStringsToAttend) {
        keysToAttend.add(Key.<Conference>create(keyString));
    }
    return ofy().load().keys(keysToAttend).values();
}
the "conferenceKeysToAttend" list is guaranteed to only have unique Conferences - does it even matter then which of the two alternatives I choose? And if so, why?
Method A loads entities one by one, while method B does a bulk load, which is cheaper since you're making just one network round trip to Google's datacenter. You can observe this by measuring the time taken by both methods while loading a bunch of keys multiple times.
While doing a bulk load, you need to be careful about which entities were actually loaded if the datastore operation throws an exception; the operation might partially succeed, with some of the entities not loaded.
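For example (a sketch: this relies on Objectify's load().keys() returning a Map in which keys that weren't found have no entry):

Map<Key<Conference>, Conference> loaded = ofy().load().keys(keysToAttend);
for (Key<Conference> key : keysToAttend) {
    if (!loaded.containsKey(key)) {
        // This entity did not come back; decide whether to retry,
        // skip it, or treat it as an error.
    }
}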
The answer depends on the size of the list. If we are talking about hundreds or more, you should not make a single batch. I couldn't find documentation on what the limit is, but there is one. If it's not that many, definitely go with loading one by one. But you should make the calls asynchronous by not calling now() immediately; load().key() returns a LoadResult<Conference> handle that you can materialize later:
List<LoadResult<Conference>> conferences = new ArrayList<LoadResult<Conference>>();
for (String conferenceString : keyStringsToAttend) {
    conferences.add(ofy().load().key(Key.create(Conference.class, conferenceString)));
}
And when you need the actual data:
for (LoadResult<Conference> result : conferences) {
    Conference c = result.now();
    // ...
}
Hi guys,
I am implementing a simple example of a two-level cache in Java:
1st level is memory
2nd level is the filesystem
I am new to Java and I am doing this just to understand caching in Java.
And sorry for my English; this language is not native for me :)
I have completed the 1st level using the LinkedHashMap class and its removeEldestEntry method; it looks like this:
import java.util.*;

public class level1 {
    private static final int max_cache = 50;

    private Map<String, String> cache = new LinkedHashMap<String, String>(max_cache, .75F, true) {
        @Override
        protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
            return size() > max_cache;
        }
    };

    public level1() {
        for (int i = 1; i < 52; i++) {
            String string = String.valueOf(i);
            cache.put(string, string);
            System.out.println("\rCache size = " + cache.size() +
                    "\tRecent value = " + i +
                    " \tLast value = " +
                    cache.get(string) + "\tValues in cache=" +
                    cache.values());
        }
    }
}
Now I am going to code my 2nd level. What code and methods should I write to implement these tasks?
1) When the 1st level cache is full, the value shouldn't just be removed by removeEldestEntry; it should be moved to the 2nd level (to a file).
2) When a new value is added to the 1st level, it should first be looked up in the file (2nd level), and if it exists there it should be moved from the 2nd level to the 1st.
I also tried to use LRUMap to upgrade my 1st level, but the compiler couldn't find the LRUMap class in the library. What's the problem? Is special syntax needed?
You can use the built-in Java serialization mechanism and just send your stuff to a file by wrapping a FileOutputStream with an ObjectOutputStream and then calling writeObject().
This method is simple but not flexible enough; for example, you will fail to read an old cache from the file if your classes have changed.
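A minimal sketch of that approach (the class and method names are mine): the whole map goes to disk via ObjectOutputStream and comes back via ObjectInputStream, which requires keys and values to be Serializable (String already is):

import java.io.*;
import java.util.HashMap;
import java.util.Map;

public class DiskStore {
    static void save(Map<String, String> entries, File file) throws IOException {
        ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(file));
        try {
            // HashMap is Serializable, so the whole map is written in one call.
            out.writeObject(new HashMap<String, String>(entries));
        } finally {
            out.close();
        }
    }

    @SuppressWarnings("unchecked")
    static Map<String, String> load(File file) throws IOException, ClassNotFoundException {
        ObjectInputStream in = new ObjectInputStream(new FileInputStream(file));
        try {
            return (Map<String, String>) in.readObject();
        } finally {
            in.close();
        }
    }
}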
You can also use serialization to XML, e.g. JAXB or XStream. I used XStream in the past and it worked just fine: you can easily store any collection in a file and then restore it.
Obviously you can store stuff in a DB, but that is more complicated.
One remark: you are not taking thread safety into consideration for your cache! By default LinkedHashMap is not thread-safe and you would need to synchronize access to it. Even better, you could use a ConcurrentHashMap, which deals with synchronization internally and can by default handle 16 separate threads (you can increase this number via one of its constructors).
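For instance, the smallest change would be wrapping the map (a sketch; note that iterating over a synchronized map still needs an explicit synchronized block):

Map<String, String> cache = Collections.synchronizedMap(
        new LinkedHashMap<String, String>(max_cache, .75F, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                return size() > max_cache;
            }
        });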
I don't know your exact requirements or how complicated you want this to be, but have you looked at existing cache implementations like the ehcache library?
I would like to query the Windows Vista Search service directly (or indirectly) from Java.
I know it is possible to query using the search-ms: protocol, but I would like to consume the result within the app.
I have found good information in the Windows Search API but none related to Java.
I would mark as accepted the answer that provides useful and definitive information on how to achieve this.
Thanks in advance.
EDIT
Does anyone have a JACOB sample, before I can mark this as accepted?
:)
You may want to look at one of the Java-COM integration technologies. I have personally worked with JACOB (JAva COm Bridge):
http://danadler.com/jacob/
Which was rather cumbersome (think working exclusively with reflection), but got the job done for me (quick proof of concept, accessing MapPoint from within Java).
The only other such technology I'm aware of is Jawin, but I don't have any personal experience with it:
http://jawinproject.sourceforge.net/
Update 04/26/2009:
Just for the heck of it, I did more research into Microsoft Windows Search, and found an easy way to integrate with it using OLE DB. Here's some code I wrote as a proof of concept:
public static void main(String[] args) {
    DispatchPtr connection = null;
    DispatchPtr results = null;
    try {
        Ole32.CoInitialize();
        connection = new DispatchPtr("ADODB.Connection");
        connection.invoke("Open",
                "Provider=Search.CollatorDSO;" +
                "Extended Properties='Application=Windows';");
        results = (DispatchPtr) connection.invoke("Execute",
                "select System.Title, System.Comment, System.ItemName, System.ItemUrl, System.FileExtension, System.ItemDate, System.MimeType " +
                "from SystemIndex " +
                "where contains('Foo')");
        int count = 0;
        while (!((Boolean) results.get("EOF")).booleanValue()) {
            ++count;
            DispatchPtr fields = (DispatchPtr) results.get("Fields");
            int numFields = ((Integer) fields.get("Count")).intValue();
            for (int i = 0; i < numFields; ++i) {
                DispatchPtr item =
                        (DispatchPtr) fields.get("Item", new Integer(i));
                System.out.println(
                        item.get("Name") + ": " + item.get("Value"));
            }
            System.out.println();
            results.invoke("MoveNext");
        }
        System.out.println("\nCount:" + count);
    } catch (COMException e) {
        e.printStackTrace();
    } finally {
        try {
            results.invoke("Close");
        } catch (COMException e) {
            e.printStackTrace();
        }
        try {
            connection.invoke("Close");
        } catch (COMException e) {
            e.printStackTrace();
        }
        try {
            Ole32.CoUninitialize();
        } catch (COMException e) {
            e.printStackTrace();
        }
    }
}
To compile this, you'll need to make sure that the Jawin JAR is in your classpath and that jawin.dll is in your path (or in the java.library.path system property). This code simply opens an ADO connection to the local Windows Desktop Search index, queries for documents with the keyword "Foo", and prints out a few key properties of the resulting documents.
Let me know if you have any questions, or need me to clarify anything.
Update 04/27/2009:
I tried implementing the same thing in JACOB as well, and will be doing some benchmarks to compare performance differences between the two. I may be doing something wrong in JACOB, but it seems to consistently be using 10x more memory. I'll be working on a jcom and com4j implementation as well, if I have some time, and try to figure out some quirks that I believe are due to the lack of thread safety somewhere. I may even try a JNI based solution. I expect to be done with everything in 6-8 weeks.
Update 04/28/2009:
This is just an update for those who've been following and are curious. It turns out there are no threading issues; I just needed to explicitly close my database resources, since the OLE DB connections are presumably pooled at the OS level (I probably should have closed the connections anyway...). I don't think there will be any further updates to this. Let me know if anyone runs into any problems with this.
Update 05/01/2009:
Added a JACOB example per Oscar's request. This goes through the exact same sequence of calls from a COM perspective, just using JACOB. While it's true JACOB has been much more actively worked on in recent times, I also notice that it's quite a memory hog (it uses 10x as much memory as the Jawin version).
public static void main(String[] args) {
    Dispatch connection = null;
    Dispatch results = null;
    try {
        connection = new Dispatch("ADODB.Connection");
        Dispatch.call(connection, "Open",
                "Provider=Search.CollatorDSO;Extended Properties='Application=Windows';");
        results = Dispatch.call(connection, "Execute",
                "select System.Title, System.Comment, System.ItemName, System.ItemUrl, System.FileExtension, System.ItemDate, System.MimeType " +
                "from SystemIndex " +
                "where contains('Foo')").toDispatch();
        int count = 0;
        while (!Dispatch.get(results, "EOF").getBoolean()) {
            ++count;
            Dispatch fields = Dispatch.get(results, "Fields").toDispatch();
            int numFields = Dispatch.get(fields, "Count").getInt();
            for (int i = 0; i < numFields; ++i) {
                Dispatch item =
                        Dispatch.call(fields, "Item", new Integer(i)).toDispatch();
                System.out.println(
                        Dispatch.get(item, "Name") + ": " +
                        Dispatch.get(item, "Value"));
            }
            System.out.println();
            Dispatch.call(results, "MoveNext");
        }
    } finally {
        try {
            Dispatch.call(results, "Close");
        } catch (JacobException e) {
            e.printStackTrace();
        }
        try {
            Dispatch.call(connection, "Close");
        } catch (JacobException e) {
            e.printStackTrace();
        }
    }
}
As a few posts here suggest, you can bridge between Java and .NET or COM using commercial or free frameworks like JACOB, JNBridge, J-Integra, etc.
Actually, I have had experience with one of these third parties (an expensive one :-) ) and I must say I will do my best to avoid repeating this mistake in the future. The reason is that it involves a lot of "voodoo" stuff you can't really debug; it's very complicated to understand what the problem is when things go wrong.
The solution I would suggest you implement is to create a simple .NET application that makes the actual calls to the Windows Search API. After doing so, you need to establish a communication channel between this component and your Java code. This can be done in various ways, for example by writing messages to a small DB that your application periodically polls, or by registering this component with the machine's IIS (if it exists) and exposing a simple web service API to communicate with it.
I know this may sound cumbersome, but the clear advantages are: a) you communicate with the Windows Search API using a language it understands (.NET or COM), and b) you control all the application paths.
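The Java side of such a channel can stay very small. Here is a sketch, assuming the .NET component exposes a plain HTTP endpoint; the URL and query parameter are made up:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLEncoder;

public class SearchClient {
    public static String query(String terms) throws Exception {
        // Hypothetical endpoint exposed by the local .NET helper application.
        URL url = new URL("http://localhost:8080/search?q=" + URLEncoder.encode(terms, "UTF-8"));
        StringBuilder sb = new StringBuilder();
        BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream(), "UTF-8"));
        try {
            String line;
            while ((line = in.readLine()) != null) {
                sb.append(line).append('\n');
            }
        } finally {
            in.close();
        }
        return sb.toString();
    }
}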
Any reason why you couldn't just use Runtime.exec() to query via search-ms and read the result of the command from a BufferedReader? For example:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class ExecTest {
    public static void main(String[] args) throws IOException {
        Process process = Runtime.getRuntime().exec("search-ms:query=microsoft&");
        BufferedReader output = new BufferedReader(new InputStreamReader(process.getInputStream()));
        StringBuffer outputSB = new StringBuffer(40000);
        String s = null;
        while ((s = output.readLine()) != null) {
            outputSB.append(s + "\n");
            System.out.println(s);
        }
        String result = outputSB.toString();
    }
}
There are several libraries out there for calling COM objects from Java; some are open source (but their learning curve is higher) and some are closed source with a quicker learning curve. A closed-source example is EZCom. The commercial ones tend to focus on calling Java from Windows as well, something I've never seen in open source.
In your case, what I would suggest is fronting the call in your own .NET class (I guess using C#, as that is closest to Java without getting into the controversial J#), and focusing on the interoperability with the .NET DLL. That way the Windows programming gets easier, and the interface between Windows and Java is simpler.
If you are looking for how to use a Java COM library, MSDN is the wrong place. But MSDN will help you write what you need from within .NET; then look at the COM library tutorials for invoking the one or two methods you need on your .NET objects.
EDIT:
Given the discussion in the answers about using a web service, you could (and probably will have better luck) build a small .NET app that calls an embedded Java web server, rather than trying to give .NET the embedded web service and having Java be the consumer of the call. For an embedded web server, my research showed Winstone to be good: not the smallest, but much more flexible.
The way to get that to work is to launch the .NET app from Java, and have the .NET app call the web service on a timer or a loop to see if there is a request; if there is, it processes it and sends the response.