I have a neo4j database with several million nodes and about as many relationships. While running a program that was adding data to it, the JVM seems to have crashed. When I later tried to query the database using an index, it opened normally and retrieved some of the nodes, but at some point returned the following error:
Exception in thread "main" org.neo4j.graphdb.NotFoundException: Node[20924] not found. This can be because someone else deleted this entity while we were trying to read properties from it, or because of concurrent modification of other properties on this entity. The problem should be temporary.
    at org.neo4j.kernel.impl.core.Primitive.ensureFullProperties(Primitive.java:601)
    at org.neo4j.kernel.impl.core.Primitive.ensureFullProperties(Primitive.java:579)
    at org.neo4j.kernel.impl.core.Primitive.hasProperty(Primitive.java:309)
    at org.neo4j.kernel.impl.core.NodeImpl.hasProperty(NodeImpl.java:53)
    at org.neo4j.kernel.impl.core.NodeProxy.hasProperty(NodeProxy.java:160)
    at org.neo4j.cypher.internal.spi.gdsimpl.GDSBackedQueryContext$$anon$1.hasProperty(GDSBackedQueryContext.scala:66)
    at org.neo4j.cypher.internal.spi.gdsimpl.GDSBackedQueryContext$$anon$1.hasProperty(GDSBackedQueryContext.scala:48)
    at org.neo4j.cypher.internal.commands.Has.isMatch(Predicate.scala:203)
    at org.neo4j.cypher.internal.pipes.FilterPipe$$anonfun$internalCreateResults$1.apply(FilterPipe.scala:30)
    at org.neo4j.cypher.internal.pipes.FilterPipe$$anonfun$internalCreateResults$1.apply(FilterPipe.scala:30)
    at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:390)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1156)
    at org.neo4j.cypher.internal.pipes.EagerAggregationPipe.internalCreateResults(EagerAggregationPipe.scala:76)
    at org.neo4j.cypher.internal.pipes.PipeWithSource.createResults(Pipe.scala:69)
    at org.neo4j.cypher.internal.pipes.PipeWithSource.createResults(Pipe.scala:66)
    at org.neo4j.cypher.internal.executionplan.ExecutionPlanImpl.org$neo4j$cypher$internal$executionplan$ExecutionPlanImpl$$prepareStateAndResult(ExecutionPlanImpl.scala:164)
    at org.neo4j.cypher.internal.executionplan.ExecutionPlanImpl$$anonfun$getLazyReadonlyQuery$1.apply(ExecutionPlanImpl.scala:139)
    at org.neo4j.cypher.internal.executionplan.ExecutionPlanImpl$$anonfun$getLazyReadonlyQuery$1.apply(ExecutionPlanImpl.scala:138)
    at org.neo4j.cypher.internal.executionplan.ExecutionPlanImpl.execute(ExecutionPlanImpl.scala:38)
    at org.neo4j.cypher.ExecutionEngine.execute(ExecutionEngine.scala:72)
    at org.neo4j.cypher.ExecutionEngine.execute(ExecutionEngine.scala:67)
    at org.neo4j.cypher.javacompat.ExecutionEngine.execute(ExecutionEngine.java:66)
    at querygraph.BasicStatsQueries.main(BasicStatsQueries.java:54)
Caused by: org.neo4j.kernel.impl.nioneo.store.InvalidRecordException: PropertyRecord[11853043] not in use
    at org.neo4j.kernel.impl.nioneo.store.PropertyStore.getRecord(PropertyStore.java:453)
    at org.neo4j.kernel.impl.nioneo.store.PropertyStore.getLightRecord(PropertyStore.java:306)
    at org.neo4j.kernel.impl.nioneo.xa.ReadTransaction.getPropertyRecordChain(ReadTransaction.java:185)
    at org.neo4j.kernel.impl.nioneo.xa.ReadTransaction.loadProperties(ReadTransaction.java:215)
    at org.neo4j.kernel.impl.nioneo.xa.ReadTransaction.nodeLoadProperties(ReadTransaction.java:239)
    at org.neo4j.kernel.impl.persistence.PersistenceManager.loadNodeProperties(PersistenceManager.java:111)
    at org.neo4j.kernel.impl.core.NodeManager.loadProperties(NodeManager.java:833)
    at org.neo4j.kernel.impl.core.NodeImpl.loadProperties(NodeImpl.java:143)
    at org.neo4j.kernel.impl.core.Primitive.ensureFullProperties(Primitive.java:596)
    ... 23 more
There was only one thread running the query (at least, only one that I started), and it was purely reading, not writing. And though the exception claims the problem should be temporary, it happens every time I try to query this index, so I assume it has to do with the bad shutdown. I have had database corruption from forced shutdowns before (prior to implementing code to prevent them), but Neo4j was always able to recover the database, even if it took a while. This seems to be much worse.
When I looped through the index manually and added a try-catch, it began returning the error for every node in the index after the one listed above. Does that mean that all these nodes are non-existent, or corrupted? That would mean a significant (huge) loss of data, since there should be about a million nodes in the index. What can I do to recover the database?
I am using 1.9.2 and would love to upgrade to use labels and so on, but I need this database right now for some time-critical work and don't have time to change anything major.
Thanks a lot in advance for any help.
Sorry that happened to you. :( What kind of crash was it?
I would recommend making a backup of the database and then deleting and recreating the index.
If you cannot delete the index programmatically, you can also delete the directory under
/data/graph.db/index/lucene/node/<indexname> when the database is shut down.
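Programmatic deletion would look roughly like this (a sketch for the embedded 1.9 API, assuming db is an already started GraphDatabaseService and the node index is named "index"):
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Transaction;

Transaction tx = db.beginTx();
try {
    db.index().forNodes("index").delete(); // drops the whole index
    tx.success();
} finally {
    tx.finish(); // 1.9.x API
}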
Afterwards you can programmatically re-index your nodes using:
for (Node node : GlobalGraphOperations.at(db).getAllNodes()) {
    if (node.hasProperty("key"))
        db.index().forNodes("index").add(node, "key", node.getProperty("key"));
}
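Note that in embedded mode the index writes have to happen inside a transaction, and with a few million nodes it is better to commit in batches than in one huge transaction. A rough sketch (the batch size of 10000 is just a guess, tune as needed):
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.index.Index;
import org.neo4j.tooling.GlobalGraphOperations;

// re-index all nodes that have the "key" property, committing every 10000 additions
Transaction tx = db.beginTx();
try {
    Index<Node> index = db.index().forNodes("index");
    int count = 0;
    for (Node node : GlobalGraphOperations.at(db).getAllNodes()) {
        if (node.hasProperty("key")) {
            index.add(node, "key", node.getProperty("key"));
            if (++count % 10000 == 0) { // commit the batch and start a new transaction
                tx.success();
                tx.finish();
                tx = db.beginTx();
            }
        }
    }
    tx.success();
} finally {
    tx.finish();
}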
It would be great if you could get us the database for analysis.
Thanks a lot
Related
I'm working on a back-end service which is asynchronous in nature. That is, we have multiple jobs that run asynchronously, and their results are written to some record.
This record is basically a class wrapping a HashMap of results (keyed by job_id).
The thing is, I don't want to calculate or know in advance how many jobs are going to run (if I knew, I could cache.invalidate() the key once all the jobs have completed).
Instead, I'd like to have the following scheme (sketched in code below):
Set an expiry for new records (i.e. expireAfterWrite)
On expiry, write (actually upsert) the record to the database
If a cache miss occurs, load() is called to fetch the record from the database (if not found, create a new one)
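Here is roughly how I wired it up (simplified; ResultsRecord, fetchFromDb and upsertToDb are placeholders for my actual record class and DB code):
import java.util.concurrent.TimeUnit;
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.LoadingCache;
import com.github.benmanes.caffeine.cache.RemovalCause;

LoadingCache<String, ResultsRecord> cache = Caffeine.newBuilder()
        .expireAfterWrite(10, TimeUnit.MINUTES)
        // fires on eviction (including expiry): upsert the record to the database
        .removalListener((String recordId, ResultsRecord record, RemovalCause cause) -> {
            if (record != null) {
                upsertToDb(recordId, record);
            }
        })
        // cache miss: load the record from the database, or create a new one
        .build(recordId -> {
            ResultsRecord fromDb = fetchFromDb(recordId);
            return fromDb != null ? fromDb : new ResultsRecord(recordId);
        });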
The problem:
I tried to use Caffeine cache but the problem is that records aren't expired at the exact time they were supposed to. I then read this SO answer for Guava's Cache and I guess a similar mechanism works for Caffeine as well.
So the problem is that a record can "wait" in the cache for quite a while, even though it was already completed. Is there a way to overcome this issue? That is, is there a way to "encourage" the cache to invalidate expired items?
That led me to question my solution. Would you consider my solution a good practice?
P.S. I'm willing to switch to other caching solutions, if necessary.
You can have a look at Ehcache with write-behind. It is certainly more setup effort, but it works quite well.
I am currently using the entity class and EntityManager that NetBeans generates when a table is bound to a database, to get and set values in a Derby database.
However, when I want to update/edit a field using:
LessonTb Obj = new LessonTb();
Obj.setAdditionalResources(Paths);
Obj.setDescription(LessonDescription);
Obj.setLessonName(LessonName);
Obj.setLessonPath(LessonName + ".txt");
Obj.setRecommendedTest(RecommendedTest);
EUCLIDES_DBPUEntityManager.getTransaction().begin();
EUCLIDES_DBPUEntityManager.getTransaction().commit();
lessonTbList.clear();
lessonTbList.addAll(lessonTbQuery.getResultList());
The current entry does not update in the database, even though I know this code worked in other projects. I use the same get and set methods from the same LessonTb class, and they work for adding a new entry and deleting an entry.
What could possibly be wrong and how do I solve my problem? No exceptions are thrown.
Here are several possibilities. Perhaps you can do more research to rule at least some of them out:
You're using an in-memory database, and you didn't realize that all the database contents are lost when your application terminates.
You're not in auto-commit mode, and your application failed to issue a commit statement after making your update
You're not actually issuing the update statement that you think you're issuing. For some reason, your program flow is not reaching that code.
Your update statement has encountered an error, but it's not the sort of error that results in an exception. Instead, there's an error code returned, but no exception is thrown.
There are multiple copies of the database, or multiple copies of the schema within the database, and you're updating one copy of the database but querying a different one.
One powerful tool for helping you diagnose things more deeply is to learn how to use -Dderby.language.logStatementText=true and read in derby.log which SQL statements you're actually issuing and what their results are. Here are a couple of links to help you get started: https://db.apache.org/derby/docs/10.4/tuning/rtunproper43517.html and http://apache-database.10148.n7.nabble.com/How-to-log-queries-in-Apache-Derby-td136818.html
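If Derby is embedded in your application, you can also set that property programmatically instead of on the command line; a minimal sketch (assuming it runs before the first connection is opened, and jdbc:derby:myDB is a placeholder URL):
public class EnableDerbyStatementLogging {
    public static void main(String[] args) throws Exception {
        // must be set before the Derby engine is loaded
        System.setProperty("derby.language.logStatementText", "true");

        // then boot/connect as usual; executed SQL will appear in derby.log
        java.sql.Connection conn =
                java.sql.DriverManager.getConnection("jdbc:derby:myDB;create=true");
        conn.close();
    }
}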
We have an Oracle DB with around 4K tables in it, and around 30 different applications accessing the data. We have an issue where one of the applications is deleting a row in one of our tables. We do not know which app or why. I'm trying to figure it out; my first thought is to use triggers to log whenever something is deleted, but is there a way in Oracle to find out which application deleted it?
Thanks
If you don't want to go down the auditing or logging route, and the application is not identifiable from v$session as it stands, you could set the name of the application by calling
dbms_application_info.set_client_info('my_application');
This sets v$session.client_info for the session, which you can read via
dbms_application_info.read_client_info(client_info out varchar2);.
You could then use triggers to record this value.
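From the Java side, each application would tag its session right after obtaining a connection; a minimal sketch (assuming connection is an open java.sql.Connection and "my_application" is whatever name you want to see):
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;

// call once per session, right after the connection is obtained
static void tagSession(Connection connection) throws SQLException {
    try (CallableStatement cs = connection.prepareCall(
            "{ call dbms_application_info.set_client_info(?) }")) {
        cs.setString(1, "my_application");
        cs.execute();
    }
}
A delete trigger on the table can then read SYS_CONTEXT('USERENV', 'CLIENT_INFO') and record it, together with the deleted key, in an audit table.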
I need some help from you guys regarding JDBC performance optimization. One of our POJOs uses JDBC to connect to an Oracle database and retrieve records. The records are basically email addresses, based on which emails will be sent to the users. The problem is performance: the process runs every weekend and the number of records is huge, around 100k.
The performance is very slow and worries us a lot. Only about 1000 records are fetched from the database per hour, which means the process would take 100 hours to complete (which is very bad). Please help me with this.
The database server and the Java process are on two different remote servers. We have used rs_email.setFetchSize(1000); hoping it would make a difference, but there was no change at all.
The same query executed on the server takes 0.35 seconds to complete. Any quick suggestion would be of great help to us.
Thanks,
Aamer.
First, look at your queries and analyze them. See if the SQL could be made more efficient (i.e., ask the database for what you want, not for what you don't want -- it makes a big difference). Also check whether there are indexes on the fields in your where and join clauses. Indexes make a big difference, but they can't be just any indexes: they have to be good indexes (i.e., the fields that make up the index provide enough uniqueness for the database to retrieve things efficiently). Work with your DBA on this. Look for high run time against the db, and check for queries with high CPU usage (even if the queries run sub-second). These are the things that can kill your database.
Also, from a code perspective, check whether you are opening and closing your connections or re-using them. That can make a big difference too.
It would help to post your code, queries, table layouts, and any indexes you have.
Use log4jdbc to see the real SQL issued for fetching a single record. Then check the speed and the execution plan for that SQL. You may need a proper index or even db defragmentation.
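For reference, wiring in log4jdbc is roughly a matter of loading its spy driver and prefixing the normal JDBC URL (sketch; the host, SID and credentials are placeholders):
import java.sql.Connection;
import java.sql.DriverManager;

// load the log4jdbc spy driver, then prefix the usual JDBC URL with "jdbc:log4jdbc:"
Class.forName("net.sf.log4jdbc.DriverSpy");
Connection conn = DriverManager.getConnection(
        "jdbc:log4jdbc:oracle:thin:@dbhost:1521:ORCL", "user", "password");
// executed statements and their timings are then written to the log4jdbc loggers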
Not sure about the Oracle driver, but I do know that the MySQL driver supports two different result-retrieval modes: "stream" and "wait until you've got it all".
The streaming mode lets you start processing the results the moment the first row is returned from the query, whereas the other mode retrieves the entire result set before you can start working on it. With huge record sets the latter often leads to memory exceptions or slow performance, because Java hits the "memory roof" and the garbage collector can't throw away "used" records the way it can in streaming mode.
The streaming mode doesn't let you navigate/scroll the result set the way the "normal"/"wait until you've got it all" mode does...
Anyway, not sure if this is of any help but it might be worth checking out.
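For what it's worth, enabling the streaming mode with MySQL Connector/J looks roughly like this (sketch; conn is an open connection and the table/column names are made up):
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

// a forward-only, read-only statement with fetch size Integer.MIN_VALUE
// tells Connector/J to stream rows one at a time instead of buffering them all
Statement stmt = conn.createStatement(
        ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
stmt.setFetchSize(Integer.MIN_VALUE);
ResultSet rs = stmt.executeQuery("SELECT email_address FROM subscribers");
while (rs.next()) {
    String email = rs.getString(1); // process each row as it arrives
}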
My answer to your question, in summary is:
1. Check network
2. Check SQL
3. Check Java code.
It sounds very slow. The first thing to check is whether you have a slow network; you can do this quickly by pinging the database server, or by running the database server on the same machine as your JVM. If it is not the network, get an explain plan for your SQL and make sure you are not doing table scans when you don't need to. If it is not the network or the SQL, then it's time to check your Java code. Are you doing anything like blocking when you shouldn't be?
I'm looking for a high-level answer, but here are some specifics in case they help: I'm deploying a J2EE app to a WebLogic cluster, with one Oracle database at the backend.
A normal flow of the app is
- users feed data (to be inserted as rows) to the app
- the app waits for the data to reach a certain size and does a batch insert into the database (only 1 commit)
There's a constraint in the database preventing "duplicate" data insertions. If the app gets a constraint violation, it has to roll back and re-insert one row at a time, so the duplicate rows can be "renamed" and inserted.
Suppose I had 2 running instances of the app. Each of the instances is about to insert 1000 rows. Even if there is only 1 duplicate, one instance will have to rollback and insert rows one by one.
I can easily see that it would be smarter to re-insert the non-conflicting 999 rows as a batch in this instance, but what if I had 3 running apps and the 999 rows also had a chance of duplicates?
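To make that concrete, the insert path currently looks roughly like this (simplified sketch; the table, columns, Row class and insertOneByOne helper are placeholders, not the real code):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

// try the whole batch in one transaction; on a constraint violation,
// roll back and fall back to row-by-row inserts so duplicates can be renamed
void insertBatch(Connection conn, List<Row> rows) throws SQLException {
    conn.setAutoCommit(false);
    String sql = "INSERT INTO data_rows (name, payload) VALUES (?, ?)";
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
        for (Row row : rows) {
            ps.setString(1, row.getName());
            ps.setString(2, row.getPayload());
            ps.addBatch();
        }
        ps.executeBatch();
        conn.commit();
    } catch (SQLException constraintViolation) {
        conn.rollback();
        insertOneByOne(conn, rows); // renames duplicates one at a time
    }
}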
So my question is this: is there a design pattern for this kind of situation?
This is a long question, so please let me know where to clarify. Thank you for your time.
EDIT:
The 1000 rows of data are in memory in each instance, but the instances cannot see each other's rows. The only way they know a row is a duplicate is when it is inserted into the database.
And if the current application design doesn't make sense, feel free to suggest better ways of tackling this problem. I would appreciate it very much.
http://www.oracle-developer.net/display.php?id=329
The simplest approach would be to avoid parallel processing of the same data. For example, your size- or time-based event could run only on one node, or post a message to a JMS queue so that only one of the nodes processes it (for instance, by using a similar duplicate check, e.g. based on a timestamp of the message/batch).