Find out who deleted a row in the DB - Java

We have an Oracle DB with around 4K tables in it, and around 30 different applications accessing the data. We have an issue where one of the applications is deleting a row in one of our tables, and we do not know which app or why. My first thought is to use a trigger to log whenever something is deleted, but is there a way in Oracle to find out which application performed the delete?
Thanks

If you don't want to go down the auditing or logging route, and the application is not distinguishable in v$session as it stands, you could set the name of the application by calling
dbms_application_info.set_client_info('my_application');
This sets v$session.client_info for the session, which you can read via
dbms_application_info.read_client_info(client_info out varchar2).
You could then use triggers to record this value.
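From the Java side, each application could tag its sessions right after obtaining a connection; a minimal JDBC sketch (the application name and the helper method are just illustrations):
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;

// Call once per session so the application shows up in v$session.client_info
// and can be captured by a DELETE trigger on the table in question.
public static void tagSession(Connection con, String appName) throws SQLException {
    try (CallableStatement cs = con.prepareCall(
            "{ call dbms_application_info.set_client_info(?) }")) {
        cs.setString(1, appName);
        cs.execute();
    }
}
The trigger can then read the value with SYS_CONTEXT('USERENV', 'CLIENT_INFO') and write it, together with the deleted key, to a log table.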

Related

Netbeans Entity Manager not updating Derby Database

I am currently using the automatically created entity class and Entity Manager that NetBeans generates when a table is bound to the database, to get and set values in a Derby database.
However, when I want to update/edit a record using:
LessonTb Obj = new LessonTb();
Obj.setAdditionalResources(Paths);
Obj.setDescription(LessonDescription);
Obj.setLessonName(LessonName);
Obj.setLessonPath(LessonName + ".txt");
Obj.setRecommendedTest(RecommendedTest);
EUCLIDES_DBPUEntityManager.getTransaction().begin();
EUCLIDES_DBPUEntityManager.getTransaction().commit();
lessonTbList.clear();
lessonTbList.addAll(lessonTbQuery.getResultList());
the entry is not updated in the database, even though I know this code worked in other projects. I use the same get and set methods from the same LessonTb class, and they work for adding a new entry and deleting an entry.
What could possibly be wrong and how do I solve my problem? No exceptions are thrown.
Here are several possibilities. Perhaps you can do more research to rule at least some of them out:
You're using an in-memory database, and you didn't realize that all the database contents are lost when your application terminates.
You're not in auto-commit mode, and your application failed to issue a commit statement after making your update.
You're not actually issuing the update statement that you think you're issuing. For some reason, your program flow is not reaching that code.
Your update statement has encountered an error, but it's not the sort of error that results in an exception. Instead, there's an error code returned, but no exception is thrown.
There are multiple copies of the database, or multiple copies of the schema within the database, and you're updating one copy of the database but querying a different one.
One powerful tool for diagnosing this more deeply is to learn how to use -Dderby.language.logStatementText=true and read in derby.log which SQL statements you are actually issuing and what their results are. Here are a couple of links to help you get started: https://db.apache.org/derby/docs/10.4/tuning/rtunproper43517.html and http://apache-database.10148.n7.nabble.com/How-to-log-queries-in-Apache-Derby-td136818.html
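If passing -D flags is awkward (for example when launching from the IDE), the same property can, as far as I know, also be set programmatically before the embedded engine boots, or placed in a derby.properties file; a small sketch:
// Equivalent to -Dderby.language.logStatementText=true on the command line.
// Must run before the Derby engine boots, i.e. before the first connection is opened.
System.setProperty("derby.language.logStatementText", "true");
Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
// statements executed from now on are written to derby.log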

Recover corrupt neo4j Database after server crash graphdb.NotFoundException

I have a neo4j database with several million nodes and about as many relationships. While running a program that was adding data to it, the JVM seems to have crashed. When I later tried to query the database using an index, it opened normally and retrieved some of the nodes, but at some point returned the following error:
Exception in thread "main" org.neo4j.graphdb.NotFoundException: Node[20924] not found. This can be because someone else deleted this entity while we were trying to read properties from it, or because of concurrent modification of other properties on this entity. The problem should be temporary.
    at org.neo4j.kernel.impl.core.Primitive.ensureFullProperties(Primitive.java:601)
    at org.neo4j.kernel.impl.core.Primitive.ensureFullProperties(Primitive.java:579)
    at org.neo4j.kernel.impl.core.Primitive.hasProperty(Primitive.java:309)
    at org.neo4j.kernel.impl.core.NodeImpl.hasProperty(NodeImpl.java:53)
    at org.neo4j.kernel.impl.core.NodeProxy.hasProperty(NodeProxy.java:160)
    at org.neo4j.cypher.internal.spi.gdsimpl.GDSBackedQueryContext$$anon$1.hasProperty(GDSBackedQueryContext.scala:66)
    at org.neo4j.cypher.internal.spi.gdsimpl.GDSBackedQueryContext$$anon$1.hasProperty(GDSBackedQueryContext.scala:48)
    at org.neo4j.cypher.internal.commands.Has.isMatch(Predicate.scala:203)
    at org.neo4j.cypher.internal.pipes.FilterPipe$$anonfun$internalCreateResults$1.apply(FilterPipe.scala:30)
    at org.neo4j.cypher.internal.pipes.FilterPipe$$anonfun$internalCreateResults$1.apply(FilterPipe.scala:30)
    at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:390)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1156)
    at org.neo4j.cypher.internal.pipes.EagerAggregationPipe.internalCreateResults(EagerAggregationPipe.scala:76)
    at org.neo4j.cypher.internal.pipes.PipeWithSource.createResults(Pipe.scala:69)
    at org.neo4j.cypher.internal.pipes.PipeWithSource.createResults(Pipe.scala:66)
    at org.neo4j.cypher.internal.executionplan.ExecutionPlanImpl.org$neo4j$cypher$internal$executionplan$ExecutionPlanImpl$$prepareStateAndResult(ExecutionPlanImpl.scala:164)
    at org.neo4j.cypher.internal.executionplan.ExecutionPlanImpl$$anonfun$getLazyReadonlyQuery$1.apply(ExecutionPlanImpl.scala:139)
    at org.neo4j.cypher.internal.executionplan.ExecutionPlanImpl$$anonfun$getLazyReadonlyQuery$1.apply(ExecutionPlanImpl.scala:138)
    at org.neo4j.cypher.internal.executionplan.ExecutionPlanImpl.execute(ExecutionPlanImpl.scala:38)
    at org.neo4j.cypher.ExecutionEngine.execute(ExecutionEngine.scala:72)
    at org.neo4j.cypher.ExecutionEngine.execute(ExecutionEngine.scala:67)
    at org.neo4j.cypher.javacompat.ExecutionEngine.execute(ExecutionEngine.java:66)
    at querygraph.BasicStatsQueries.main(BasicStatsQueries.java:54)
Caused by: org.neo4j.kernel.impl.nioneo.store.InvalidRecordException: PropertyRecord[11853043] not in use
    at org.neo4j.kernel.impl.nioneo.store.PropertyStore.getRecord(PropertyStore.java:453)
    at org.neo4j.kernel.impl.nioneo.store.PropertyStore.getLightRecord(PropertyStore.java:306)
    at org.neo4j.kernel.impl.nioneo.xa.ReadTransaction.getPropertyRecordChain(ReadTransaction.java:185)
    at org.neo4j.kernel.impl.nioneo.xa.ReadTransaction.loadProperties(ReadTransaction.java:215)
    at org.neo4j.kernel.impl.nioneo.xa.ReadTransaction.nodeLoadProperties(ReadTransaction.java:239)
    at org.neo4j.kernel.impl.persistence.PersistenceManager.loadNodeProperties(PersistenceManager.java:111)
    at org.neo4j.kernel.impl.core.NodeManager.loadProperties(NodeManager.java:833)
    at org.neo4j.kernel.impl.core.NodeImpl.loadProperties(NodeImpl.java:143)
    at org.neo4j.kernel.impl.core.Primitive.ensureFullProperties(Primitive.java:596)
    ... 23 more
There was only one thread (at least, that I started) running the query, and it was all reading, not writing. And although the exception claims the problem is temporary, it happens every time I try to query this index, so I assume it is related to the bad shutdown. I have had database corruption from forced shutdowns before (before I implemented code to prevent them), but Neo4j was always able to recover the database, though it took a while. This seems much worse.
When I looped through the index manually and added a try-catch, it returned the same error for every node in the index after the one listed above. Does that mean all of those nodes are non-existent or corrupted? That would be a significant (huge) loss of data, since there should be about a million nodes in the index. What can I do to recover the database?
I am using 1.9.2 and would love to upgrade to use labels etc., but I need this database right now for some time-critical work and don't have time to change anything major.
Thanks a lot in advance for any help.
Sorry that happened to you. :( What kind of crash was it?
I would recommend making a backup of the database and then deleting and recreating the index.
If you cannot delete the index programmatically, you can also delete the directory under
/data/graph.db/index/lucene/node/<indexname> while the database is shut down.
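If the programmatic route does work for you, note that in 1.9 the deletion itself also has to run inside a transaction; a minimal sketch (assuming db is your embedded GraphDatabaseService and "index" is the index name from below):
Transaction tx = db.beginTx();
try {
    db.index().forNodes("index").delete(); // drops the whole node index
    tx.success();
} finally {
    tx.finish();
}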
Afterwards you can programmatically re-index your nodes using
// run this inside a transaction (db.beginTx()), committing in batches for a large graph
for (Node node : GlobalGraphOperations.at(db).getAllNodes()) {
    if (node.hasProperty("key")) {
        db.index().forNodes("index").add(node, "key", node.getProperty("key"));
    }
}
It would be great if you could get us the database for analysis.
Thanks a lot

Get MYSQL last updated data or lastly inserted data

I have a problem like this.
I am accessing a database whose new-entry table currently has over 100,000 rows.
Now I want to write a listener: if any new record is inserted into that table from somewhere else, I need to be notified.
My question is: what is the best and fastest way to do this? There will be around 500 new rows per day in the new-entry table. Is it suitable to check the database repeatedly from a thread?
I'm using Java to do this with MySQL.
Please advise me.
I am not sure whether any listener exists for MySQL changes, so it wouldn't be straightforward to get these details.
But there is something called the binary log in MySQL, which contains "events" that describe database changes such as table creation operations or changes to table data.
So one way to track the changes is to poll these logs. The challenge is that they are written in a binary format; MySQL provides a utility called mysqlbinlog to render them as text.
Here is one Java parser that can read the MySQL binary logs:
https://github.com/tangfl/jbinlog
Integrating all these bits and pieces, you may be able to get what you need.
Try out this:
numero = stmt.executeUpdate(query, Statement.RETURN_GENERATED_KEYS);
Take a look at the documentation for the JDBC Statement interface.
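For completeness, the generated key can then be read back from the same statement; a minimal sketch (java.sql.ResultSet, reusing the stmt and query from above):
// executeUpdate with RETURN_GENERATED_KEYS makes the new auto-increment value
// available through getGeneratedKeys()
int numero = stmt.executeUpdate(query, Statement.RETURN_GENERATED_KEYS);
try (ResultSet keys = stmt.getGeneratedKeys()) {
    if (keys.next()) {
        long lastInsertedId = keys.getLong(1);
        System.out.println("last inserted id: " + lastInsertedId);
    }
}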
I used the Java Timer class as an alternative solution. Now it works fine: it checks the database every 10 seconds and, if the condition is true, executes what I want.
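For anyone landing here later, the polling variant can be as simple as the sketch below (the table name new_entry, its auto-increment id column, and the connection details are assumptions; adapt to your schema and connection pooling):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Timer;
import java.util.TimerTask;

public class NewEntryPoller {
    private long lastSeenId = 0;

    public void start() {
        Timer timer = new Timer(true); // daemon timer
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override
            public void run() {
                try (Connection con = DriverManager.getConnection(
                            "jdbc:mysql://localhost/mydb", "user", "password");
                     PreparedStatement ps = con.prepareStatement(
                            "SELECT id FROM new_entry WHERE id > ? ORDER BY id")) {
                    ps.setLong(1, lastSeenId);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            lastSeenId = rs.getLong("id");
                            // a new row arrived -- notify whoever needs to know
                        }
                    }
                } catch (SQLException e) {
                    e.printStackTrace();
                }
            }
        }, 0, 10000); // every 10 seconds, matching the approach above
    }
}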

Is there a good patterns for distributed software and one backend database for this problem?

I'm looking for a high level answer, but here are some specifics in case it helps, I'm deploying a J2EE app to a cluster in WebLogic. There's one Oracle database at the backend.
A normal flow of the app is
- users feed data (to be inserted as rows) to the app
- the app waits for the data to reach a certain size and does a batch insert into the database (only 1 commit)
There's a constraint in the database preventing "duplicate" data insertions. If the app gets a constraint violation, it has to roll back and re-insert one row at a time, so the duplicate rows can be "renamed" and inserted.
Suppose I had 2 running instances of the app. Each of the instances is about to insert 1000 rows. Even if there is only 1 duplicate, one instance will have to rollback and insert rows one by one.
I can easily see that it would be smarter to re-insert the 999 non-conflicting rows as a batch in this instance, but what if I had 3 running apps and those 999 rows also had a chance of containing duplicates?
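For reference, the fallback today looks roughly like this (a simplified sketch; the table, columns, Row type, and helpers are made up):
import java.sql.BatchUpdateException;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLIntegrityConstraintViolationException;
import java.util.List;

// Happy path: one batch, one commit. On a constraint violation, roll back and
// re-insert row by row so duplicates can be "renamed" and retried.
void insertBatch(Connection con, List<Row> rows) throws Exception {
    con.setAutoCommit(false);
    try (PreparedStatement ps = con.prepareStatement(
            "INSERT INTO items (name, payload) VALUES (?, ?)")) {
        for (Row r : rows) {
            ps.setString(1, r.name);
            ps.setString(2, r.payload);
            ps.addBatch();
        }
        ps.executeBatch();
        con.commit();                              // the single commit
    } catch (BatchUpdateException e) {
        con.rollback();
        for (Row r : rows) {                       // slow path: one row, one commit at a time
            try {
                insertSingle(con, r);              // hypothetical helper
            } catch (SQLIntegrityConstraintViolationException dup) {
                insertSingle(con, r.renamed());    // "rename" the duplicate and retry
            }
            con.commit();
        }
    }
}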
So my question is this: is there a design pattern for this kind of situation?
This is a long question, so please let me know where to clarify. Thank you for your time.
EDIT:
The 1000 rows of data are in memory in each instance, but the instances cannot see each other's rows. The only way they know a row is a duplicate is when it is inserted into the database.
And if the current application design doesn't make sense, feel free to suggest better ways of tackling this problem. I would appreciate it very much.
http://www.oracle-developer.net/display.php?id=329
The simplest approach would be to avoid parallel processing of the same data. For example, your size- or time-based event could run on only one node, or post a message to a JMS queue so that only one of the nodes processes it (for instance, by using a similar duplicate check, e.g. based on a timestamp of the message/batch).
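A bare-bones sketch of that hand-off (the JNDI names and the Serializable batch type are made up; in WebLogic both would come from your JMS module configuration, and a single consumer or MDB would drain the queue and do the actual insert, so only one node at a time touches the table):
import java.io.Serializable;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;

// Any node that has accumulated a full batch hands it off instead of inserting directly.
void handOff(Serializable batchOfRows) throws Exception {
    InitialContext ctx = new InitialContext();
    ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/cf");
    Queue queue = (Queue) ctx.lookup("jms/batchInsertQueue");
    Connection con = cf.createConnection();
    try {
        Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(queue);
        producer.send(session.createObjectMessage(batchOfRows));
    } finally {
        con.close();
    }
}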

Do I need to normalize this MySQL db?

I have a classifieds website which uses SOLR to search for whatever ads the user is looking for... SOLR then returns the IDs of all the matches found. I then use the IDs to fetch and display the ads from a MySQL table.
Currently I have one huge table in MySQL containing everything.
Sometimes some of the fields are empty because, for instance, an apartment has no "model" but a car does.
Is this a problem for me if I use SOLR like I do?
Thanks
Ask yourself these questions:
Is your current implementation slow or prone to error?
Are you adding a lot of "hacks" in order to display content or fetch data correctly due to the de-normalization of your database?
In the long run, will you benefit from normalizing the table?
Hope that helps. It all depends on your situation! Personally, I build databases normalized and then de-normalize as needed to keep things speedy.
If you are using SOLR, why don't you just serve the complete ad from Solr instead of MySQL, to save DB time?
One huge table is usually not a good option at all.
