I have a Java Swing application that connects to a MySQL database and performs transactions without any apparent problem. The application also generates one-up (sequential) numbers for transactions. Data is committed without issue most of the time, but there have been instances where previously committed data is wiped from the database altogether and the one-up numbers roll back as well. For example, if I have issued invoices with one-up invoice numbers from 1 to 200, the next morning when I check the sales report, it shows transactions with IDs from 1 to 100 only. The rest of the data is missing. But since I have printed copies of all invoices from 1 to 200, I am certain that the transactions did take place. Is there anything in MySQL that I need to watch out for? Will upgrading to MySQL version 8 help? I feared there was some kind of malware on the machine, so I have even tightened the antivirus settings, but that hasn't helped. I have also set a very strong MySQL DB password. Nothing seems to have helped prevent this from happening.
For example, if I have issued invoices with one-up invoice numbers from 1 to 200, the next morning when I check the sales report, it shows transactions with IDs from 1 to 100 only. The rest of the data is missing. But since I have printed copies of all invoices from 1 to 200, I am certain that the transactions did take place.
What you have stated doesn't prove that the transaction committed. One possibility is that the invoice was printed during the transaction; i.e. before you committed it.
We can't eliminate this possibility without examining your code. And there could be any number of other possibilities like that.
Is there anything in MySQL that I need to watch out for?
I am aware of nothing in MySQL that would cause spontaneous roll-back of transactions that have been committed.
Will upgrading to MySQL version 8 help?
It is highly unlikely that it will help. More to the point, you won't really know for sure that it has made any difference ... if you don't know the real cause.
Here are some possible explanations. There are probably others.
Bugs in your code; e.g. see above.
Some scheduled task (that you might have forgotten about or not know about) has bugs in it.
Something is restoring an older version of your database.
A previous employee / developer left a "time bomb" in your code.
A hacker (or a previous employee or a current employee) has access to the system and is "messing with you".
Malware / viruses (though it is unlikely that generic malware would cause this kind of behavior). And note that "tightening" your AV may not help if you have already been infected.
Some strange MySQL bug that I've never heard of.
I would suggest things like:
checking that all accounts for the system are secure, and closing any that should no longer exist,
checking the system access logs, etcetera, for signs of unauthorized access or access at unexpected times,
using MySQL Enterprise Audit - https://dev.mysql.com/doc/refman/8.0/en/audit-log.html
implementing something that (for example) takes periodic snapshots of certain tables that you can then use for later analysis.
Related
I have a situation in a Java application where an entry is being made in a table and, simultaneously, there is a delete request for the same record. How do we handle such a scenario?
Could anybody suggest how to deal with issues like these in a way that works for small as well as large applications?
I think the question is how the UI/UX should be handled when such a scenario occurs. Besides the concurrency issue described in the question, there can be other scenarios: for example, user 1 opens the edit-person page and, in the meantime, user 2 deletes that record from another login. What should happen when user 1 tries to save the record?
You should probably return an error message to the user stating the details of the error (record deleted, updated by someone else, etc.).
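One way to produce such an error is an existence check at save time. The sketch below is purely illustrative: the PersonStore class and its in-memory map are assumptions I'm introducing, not anything from the question, and a real application would perform the same check against the database row.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch: reject a save when the record was deleted by another user,
// so the UI layer can show a descriptive error instead of failing silently.
class RecordDeletedException extends RuntimeException {
    RecordDeletedException(long id) {
        super("Record " + id + " was deleted by another user");
    }
}

class PersonStore {
    private final Map<Long, String> people = new ConcurrentHashMap<>();

    void create(long id, String name) {
        people.put(id, name);
    }

    void delete(long id) {
        people.remove(id);
    }

    // Update only if the record still exists; otherwise raise an error
    // carrying the details the user should see.
    void save(long id, String newName) {
        String updated = people.computeIfPresent(id, (k, v) -> newName);
        if (updated == null) {
            throw new RecordDeletedException(id);
        }
    }
}
```

The same idea extends to "updated by someone else" by comparing a version column instead of mere existence.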
Your question is very broad, and so is the answer.
I will narrow the problem a bit by assuming you are using Spring (Boot). If so, it is much easier to answer.
Use the @Transactional annotation on the methods that contain the logic to save or delete (and include the required libraries, of course).
With the methods annotated this way, the Spring (Boot) application will guarantee that both operations occur in the order required to maintain a consistent database.
If an error occurs, you can handle this in higher levels of your application or just show an error to the user.
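A minimal sketch of what that annotation looks like in a Spring service. The Person entity, PersonRepository interface, and replacePerson method are hypothetical names chosen for illustration, not part of the question:

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Sketch: the save/delete logic lives in one transactional method, so either
// all statements commit together or the whole unit of work rolls back.
@Service
public class PersonService {

    private final PersonRepository repository; // hypothetical Spring Data repository

    public PersonService(PersonRepository repository) {
        this.repository = repository;
    }

    @Transactional
    public void replacePerson(long oldId, Person replacement) {
        repository.deleteById(oldId);  // both operations run inside the
        repository.save(replacement);  // same database transaction
    }
}
```

If either statement throws, Spring rolls the transaction back and the exception can be translated into a user-facing error at a higher layer.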
I have a problem with duplicate records arriving in our database via a Java web service, and I think it's to do with Oracle processing threads.
Using an iPhone app we built, users add bird observations to a new site they visit on holiday. They create three records at "New Site A" (for example). The iPhone packages each of these three records into separate JSON strings containing the same date and location details.
On Upload, the web service iterates through each JSON string.
Iteration/Observation 1. It checks the database to see if the site exists, and if not, creates a new site and adds the observation into a hanging table.
Iteration/Obs 2. The site should now exist in the database, but it isn't found by the same site check that ran in Iteration 1, and a second new site is created.
Iteration/Obs 3. The check for existing site NOW WORKS, and the third observation is attached to one of the existing sites. So the web service and database code does work.
The web service commits at the end of each iteration.
Is the reason the second iteration doesn't find the new site in the database a delay in the Oracle commit after it's been called from the Java code, so that iteration 2 has already started processing by the time iteration 1 is truly complete? Or is it possible that Oracle is running each iteration on a separate thread?
One solution we thought of was to use Thread.sleep(1000) in the web service, but I'd rather not penalize the iPhone users.
Thanks for any help you can provide.
Iain
Sounds like a race condition to me. Probably your observations 1 and 2 arrive very close to each other, so that 1 is still processing when 2 arrives. Oracle is ACID-compliant, meaning your transaction for observation 2 cannot see the changes made in transaction 1 unless transaction 1 completed before transaction 2 started.
If you need a check-then-create functionality, you'd best synchronize this at a single point in your back end.
Also, add a constraint in your DB to avoid the duplication at all costs.
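The "single point" synchronization can be sketched like this. The SiteRegistry class, its key format, and the in-memory map are all stand-ins I'm assuming for illustration; a real backend would consult the database inside the synchronized section (and still keep the unique constraint as a safety net):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: serialize check-then-create so two concurrent uploads for the
// same new site can never both pass the "does it exist?" check.
class SiteRegistry {
    private final Map<String, Long> siteIdsByKey = new HashMap<>();
    private long nextId = 1;

    // One lock around the check and the create together: the second caller
    // blocks until the first has finished, then finds the existing site.
    synchronized long getOrCreateSite(String siteKey) {
        return siteIdsByKey.computeIfAbsent(siteKey, k -> nextId++);
    }
}
```

Note this only works within a single JVM; with multiple app servers, the database constraint (or a database-level lock) has to be the final arbiter.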
It's not an Oracle problem; Thread.sleep would be a poor solution, especially since you don't know the root cause.
Your description is confusing. Are the three JSON strings sent in one HTTP request? Does the order matter, or does processing any of them first set up the new location for the ones that follow?
What's a "hanging table"?
Is this a parent-child relation between location and observation? So the unit of work is to INSERT a new location into the parent table followed by three observations in the child table that refer back to the parent?
I think it's a problem with your queries and how they're written. I can promise you that Oracle is fast enough for this trivial problem. If it can handle NASDAQ transaction rates, it can handle your site.
I'd write your DAO for Observation this way:
public interface ObservationDao {
    void saveOrUpdate(Observation observation);
}
Keep all the logic inside the DAO. Test it outside the servlet and put it aside. Once you have it working you can concentrate on the web app.
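To show the kind of logic the DAO should own, here is a minimal in-memory sketch. The Observation fields, the added find method, and the InMemoryObservationDao class are assumptions for illustration; a real implementation would issue the corresponding INSERT/UPDATE (or MERGE) against Oracle:

```java
import java.util.HashMap;
import java.util.Map;

class Observation {
    final String id;      // natural key of the observation (assumed)
    final String payload; // rest of the record, simplified for the sketch
    Observation(String id, String payload) {
        this.id = id;
        this.payload = payload;
    }
}

interface ObservationDao {
    void saveOrUpdate(Observation observation);
    Observation find(String id);
}

// Sketch: all check/insert/update logic lives inside the DAO, so it can be
// tested on its own before being wired into the servlet.
class InMemoryObservationDao implements ObservationDao {
    private final Map<String, Observation> store = new HashMap<>();

    @Override
    public synchronized void saveOrUpdate(Observation observation) {
        store.put(observation.id, observation); // insert or overwrite in one step
    }

    @Override
    public synchronized Observation find(String id) {
        return store.get(id);
    }
}
```

Calling saveOrUpdate twice with the same id leaves exactly one record, which is the idempotency the duplicate-site problem needs.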
In troubleshooting operations issues, I'm finding it difficult at times to diagnose a problem without more details. I see from timestamps that a merchant record changed on a particular date, for example, and the processing of transactions on the prior day is called into question. Logging what changed could help quickly rule out possibilities.
Are there any utilities out there that do that sort of comparison automatically? I'd like it to be able to do something like:
String logDelta=SomeLibrary.describeChanges(bean1, bean2);
I'd be hoping for a one-liner with something like:
"lastName{'Onassis','Kennedy Onassis'}, favoriteNumber{16,50}"
This is called an audit trail or an audit log, and it's generally done in the database using triggers or stored procedures that copy the changed row along with the name of the user and a timestamp. It's very common to do this for compliance reasons. I haven't seen any packages that manage it for you, because it's usually very tightly coupled to the database design: you don't necessarily want a copy of every single row or every field, and it can become very expensive in a highly transactional environment.
Try googling 'audit trail'.
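That said, the one-liner the question asks for is straightforward to sketch with reflection. The BeanDiff class and describeChanges method below are an illustration I'm introducing, not an existing library:

```java
import java.lang.reflect.Field;
import java.util.Objects;
import java.util.StringJoiner;

// Sketch: compare two beans of the same class field by field and report
// the differences as "field{'old','new'}" entries.
final class BeanDiff {
    static String describeChanges(Object before, Object after) {
        StringJoiner delta = new StringJoiner(", ");
        try {
            for (Field f : before.getClass().getDeclaredFields()) {
                f.setAccessible(true);
                Object oldValue = f.get(before);
                Object newValue = f.get(after);
                if (!Objects.equals(oldValue, newValue)) {
                    delta.add(f.getName() + "{'" + oldValue + "','" + newValue + "'}");
                }
            }
        } catch (IllegalAccessException e) {
            throw new IllegalStateException(e);
        }
        return delta.toString();
    }
}
```

This only inspects declared fields of the concrete class and won't recurse into nested objects or superclasses, but it produces output in the spirit of lastName{'Onassis','Kennedy Onassis'}.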
I need some help from you regarding JDBC performance optimization. One of our POJOs uses JDBC to connect to an Oracle database and retrieve records. The records are email addresses, based on which emails will be sent to users. The problem here is performance. This process runs every weekend and the number of records is very large, around 100k.
The performance is very slow, and it worries us a lot. Only 1000 records are fetched from the database every hour, which means this process will take 100 hours to complete (which is very bad). Please help me with this.
The database server and the Java process are on two different remote servers. We have used rs_email.setFetchSize(1000); hoping it would make a difference, but there was no change at all.
The same query executed on the server takes 0.35 seconds to complete. Any quick suggestion would be of great help to us.
Thanks,
Aamer.
First, look at your queries and analyze them. See if the SQL could be made more efficient (i.e., ask the database for what you want, not for what you don't want -- it makes a big difference). Also check whether there are indexes on the fields in your where and join clauses. Indexes make a big difference, but they can't be just any indexes: they have to be good indexes (i.e., the fields that make up the index provide enough uniqueness for the database to retrieve things appropriately). Work with your DBA on this. Look for either high run time against the db or queries with high CPU usage (even if the queries run sub-second). These are the things that can kill your database.
Also, from a code perspective, check whether you are opening and closing your connections for each record or re-using them. That can make a big difference too.
It would help to post your code, queries, table layouts, and any indexes you have.
Use log4jdbc to see the real SQL used to fetch a single record. Then check the speed and the plan for that SQL. You may need a proper index or even db defragmentation.
Not sure about the Oracle driver, but I do know that the MySQL driver supports two different result retrieval methods: "stream" and "wait until you've got it all".
The streaming method lets you start processing the results the moment the first row is returned from the query, whereas the other method retrieves the entire result set before you can start work on it. With huge record sets, the latter often leads to memory exceptions or slow performance, because Java hits the "memory roof" and the garbage collector can't throw away "used" records the way it can in streaming mode.
The streaming mode doesn't let you navigate/scroll the result set the way the "normal"/"wait until you've got it all" mode does.
Anyway, not sure if this is of any help but it might be worth checking out.
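For MySQL Connector/J specifically, streaming is enabled with a forward-only, read-only statement and a fetch size of Integer.MIN_VALUE. The connection URL, credentials, and the subscribers table below are placeholders; for Oracle the analogous knob is simply a larger setFetchSize value on the statement:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class StreamingFetch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://dbhost/mydb", "user", "password");
             // Streaming requires a forward-only, read-only statement...
             Statement stmt = conn.createStatement(
                 ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
            // ...and this special fetch size tells Connector/J to stream rows
            // one at a time instead of buffering the whole result set in memory.
            stmt.setFetchSize(Integer.MIN_VALUE);
            try (ResultSet rs = stmt.executeQuery("SELECT email FROM subscribers")) {
                while (rs.next()) {
                    System.out.println(rs.getString("email"));
                }
            }
        }
    }
}
```

The trade-off mentioned above applies: while streaming, you cannot scroll the result set, and the connection is tied up until you finish reading or close the statement.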
My answer to your question, in summary, is:
1. Check network
2. Check SQL
3. Check Java code.
It sounds very slow. The first thing to check is whether you have a slow network. You can do this pretty quickly by just pinging the database server, or by running the database server on the same machine as your JVM. If it is not the network, get an explain plan for your SQL and ensure you are not doing table scans when you don't need to be. If it is not the network or the SQL, then it's time to check your Java code. Are you doing anything like blocking when you shouldn't be?
I have to go through a database and modify it according to some logic. The problem looks something like this: I have a history table in my database that I have to modify.
Before modifying anything I have to look at whether an object (which has several rows in the history table) had a certain state, say 4 or 9. If it had state 4 or 9 then I have to check the rows between the currently found row and the next state 4 or 9 row. If such a row (between those states) has a specific value in a specific column then I do something in the next row. I hope this is simple enough to give you an idea. I have to do this check for all the objects. Keep in mind that any object can be modified anywhere in its life cycle (of course until it reaches a final state).
I am using SQL Server 2005 and Hibernate. AFAIK I cannot do such a complicated check in Transact-SQL! So what would you recommend? So far I have been thinking of doing it as a JUnit test. This would have the advantage of having Hibernate help me do the modifications, and I would have Java lists and other data structures I might need that don't exist in SQL. And if I do it as a JUnit test, I am not losing my mapping files!
I am curious what approaches you would use.
I think you should be able to use cursors to manage the complicated checks in SQL Server. You didn't mention how frequently you need to do this, but if this is a one-time thing, you can either do it in Java or SQL Server, depending on your comfort level.
If this check needs to be applied on every CRUD operation, perhaps a database trigger is the way to go. If the logic may change frequently over time, I would rather write the checks in Hibernate, assuming no one will hit the database directly.