I have been assigned to test a MongoDB database from a Java backend, and I was told that I had to create the database entirely from a script for this task.
But I have trouble understanding the benefit of creating a database from scratch with a script instead of having a permanent test database. In both cases, I imagine, data would be inserted on startup and cleaned up on teardown.
Why is it beneficial, from a testing perspective, to create and delete a database when testing?
Sometimes tests fail, and in that case the teardown phase may never be reached.
Furthermore, dropping the database is the fastest and most effective way to clean it, although perhaps not the most efficient one. Above all, it guarantees that you do not forget anything in your cleanup routine.
And in particular for performance tests it is important that the database is in exactly the same state for each run; otherwise the run times cannot be compared with each other. An improvement in a subsequent run could have been caused simply because tablespaces had already been extended, or similar, and not because the code optimisation worked.
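As an illustration, a minimal sketch of dropping and recreating the database around each test with the MongoDB Java driver and JUnit 5 might look like this (the connection string, database name and collection names are placeholders):

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;

class OrderRepositoryIT {

    private MongoClient client;
    private MongoDatabase database;

    @BeforeEach
    void setUp() {
        // Placeholder connection string and database name for the test environment.
        client = MongoClients.create("mongodb://localhost:27017");
        database = client.getDatabase("orders_test");

        // Seed the known start state, e.g. the same data your creation script inserts.
        database.getCollection("orders")
                .insertOne(new Document("orderId", 1).append("status", "NEW"));
    }

    @AfterEach
    void tearDown() {
        // Dropping the whole database guarantees nothing is left behind,
        // even if a test failed halfway through.
        database.drop();
        client.close();
    }

    // ... test methods go here ...
}
```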
Most of the time, a test means a predefined environment and an expected reaction of that environment to our assumed states. So, to verify it, we need a process that is as automated and repeatable as possible, without interference from manual setup or configuration.
In the software development process we try to cover as many test cases as possible for the QA of the product. When we talk about that many test cases, each one should be isolated from the others. If they are not well isolated, the result may vary with each execution round, which eventually invalidates the testing process.
They need not be. However:
You lose portability.
You don't have a known start state for your test.
Related
I have a project with more than 50 Liquibase migrations.
I have tables: Currencies, Countries ... and they are currently populated in the migrations.
The problem is that for each integration test where the context starts, all 50 of my migrations have to run. It takes time. And as you know, Spring is not the fastest framework.
What can I do? Gradle spends 10 minutes running all the tests.
Of course, you may say it is a monolith; yes, it is, but the customer doesn't want to split up the logic because the average skill level of the team is quite low.
How can I speed up my integration tests?
Depending on the kind of migrations, they may not be an actual performance issue. I'm looking at about 130 migrations in one project at the moment, and while they do take a certain amount of time, it's nothing compared to the time it takes to set up and tear down the test context. Starting from a clean slate I'd expect it to shave off maybe 10-20 seconds at best.
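If the context setup really is the dominant cost, one thing that sometimes helps (a sketch, assuming the project uses Spring Boot's test support) is to give every integration test exactly the same test configuration via a common base class, so Spring's test-context caching can reuse one context and the migrations only run once per test JVM:

```java
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.ActiveProfiles;

// All integration tests extend this class and therefore share the exact same
// context configuration; Spring caches the context, so Liquibase runs once
// per test JVM instead of once per test class.
@SpringBootTest
@ActiveProfiles("integration-test")   // the profile name is a placeholder
public abstract class AbstractIntegrationTest {
}

// Example test reusing the cached context:
class CurrencyServiceIT extends AbstractIntegrationTest {
    // @Autowired fields and @Test methods as usual
}
```

Note that annotations such as @MockBean or @DirtiesContext on individual test classes change the cache key and will force a fresh context again.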
It may make sense to restart for other reasons though. For example we have changesets from 2015 that are rolled back in other changesets, so they're just extra clutter. The documentation isn't very specific about it, but you can remove all changesets and start from the beginning in the middle of a project. However you need to be careful that you then know what the correct state of the database is (without any new changesets you might make). As mentioned in the docs, it usually means the state of the production database.
But remember, this does not guarantee a significant speed-up.
I have two test environments. My application performs much worse on the second one. I suspect this is because the first system uses a database which runs on better hardware (more CPU, faster connection). I would like to verify my claims somehow. Are there any tools that would help me with that? Should it be helpful: I am using Oracle 11g, and my app uses Hibernate to connect to the database.
Mind you, I am not interested in profiling my schema. I would like to compare how fast the same database (meaning schema + data) is on two different machines.
If you are interested why I suspect that the database is the problem: I profiled my application during tests on those two environments. On the second test environment, the methods responsible for talking to the database (namely org.apache.tomcat.dbcp.dbcp.DelegatingPreparedStatement.executeQuery()) use much more of the CPU time.
To answer the question: I believe you'd use JMeter to profile the two environments, and get comprehensive data out of the tests you run. VisualVM will also be helpful, but that depends on the kind of data you need, and how you need to present (or analyze) it.
But as for the general problem, is the data on the two databases exactly the same? Because if it is not, some possibilities open up - your transactions might be depending on data that is locked by another process (therefore, you'd need to look at your transactions and the transaction isolation level they use).
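If you want something more direct than a full JMeter plan, a rough sketch of the idea is to time the same representative query over plain JDBC against each environment and compare the numbers; the URL, credentials and query below are placeholders, and the Oracle JDBC driver must be on the classpath:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class DbTimingProbe {

    public static void main(String[] args) throws Exception {
        // Placeholders: point these at environment 1 and environment 2 in turn.
        String url = "jdbc:oracle:thin:@//dbhost:1521/service";
        String user = "app";
        String password = "secret";
        String sql = "SELECT COUNT(*) FROM some_table WHERE some_column = ?";

        try (Connection conn = DriverManager.getConnection(url, user, password);
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, "some value");

            // Warm up once so statement parsing and caching don't skew the first measurement.
            runOnce(ps);

            int runs = 20;
            long total = 0;
            for (int i = 0; i < runs; i++) {
                long start = System.nanoTime();
                runOnce(ps);
                total += System.nanoTime() - start;
            }
            System.out.printf("average query time: %.2f ms%n", total / runs / 1_000_000.0);
        }
    }

    private static void runOnce(PreparedStatement ps) throws Exception {
        try (ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                rs.getLong(1);
            }
        }
    }
}
```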
When I start a JVM in debug mode, things naturally slow down.
Is there a way to state that I am interested in debugging only a single application instead of the 15 (making up a number here) applications that run on this JVM?
An approach that facilitates this might make things faster, particularly when we already know from the logs and other trace facilities that the likely issue is with a single application.
Appreciate thoughts and comments
Thanks
Manglu
I am going to make a lot of assumptions here, especially as your question is missing a lot of contextual information.
Is there a way to state that I am interested in debugging only a single application instead of the 15 (making up a number here) applications that run on this JVM?
Firstly, I will assume that you are attempting to do this in production. If so, step back and think about what could go wrong. You might be setting a single breakpoint, but that will queue up all the requests arriving at that breakpoint, and by doing so you've thrown any SLA requirements out of the window. And, if your application handles any sensitive data, you risk seeing something that you are not supposed to see.
Secondly, even if you were doing this in a shared development or testing environment, it is a bad idea, especially if you are unsure of what you are looking for. If you are hunting a synchronization bug, then this is probably the wrong way to do so; other threads will obviously be sharing the data that you are reading, making it less likely that you will find the culprit.
The best alternative to this is to switch on trace logging in your application. This will, of course, be useless unless you have embedded the appropriate logger calls in your application (especially to trace method arguments and return values). With trace logs at your disposal, you should be able to create an integration or unit test that reproduces the exact conditions of failure on your local developer installation; that is where you ought to be doing your debugging. Sometimes even a functional test will suffice.
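For illustration, a minimal sketch of such logger calls with SLF4J (the class and method names are made up):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PaymentService {   // hypothetical class, for illustration only

    private static final Logger log = LoggerFactory.getLogger(PaymentService.class);

    public String charge(String accountId, long amountCents) {
        // Guard with isTraceEnabled() so argument formatting costs nothing when tracing is off.
        if (log.isTraceEnabled()) {
            log.trace("charge() called with accountId={}, amountCents={}", accountId, amountCents);
        }

        String receiptId = doCharge(accountId, amountCents);

        log.trace("charge() returning receiptId={}", receiptId);
        return receiptId;
    }

    private String doCharge(String accountId, long amountCents) {
        // ... actual business logic ...
        return "receipt-" + accountId + "-" + amountCents;
    }
}
```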
There is no faster approach in general, as it is simply not applicable to all situations. It is possible for you to establish a selected number of breakpoints in any of the other environments, but it simply isn't worth the trouble, unless you know that only your requests are being intercepted by the debuggee process.
All,
Recently I developed code that is supposedly a thread-safe class. The reason I say 'supposedly' is that even after using synchronized blocks, immutable data structures and concurrent classes, I was not able to test the code for some cases because of the thread scheduling environment of the JVM. I.e., I only had test cases on paper but could not replicate the same test environment. Are there any specific guidelines that the experienced members over here can share about how to test a multi-threaded environment?
First thing is, you can't ensure only with testing that your class is fully thread-safe. Whatever tests you run on it, you still need to have your code reviewed by as many experienced eyes as you can get, to detect subtle concurrency issues.
That said, you can devise specific test scenarios to try to cover all possible inter-thread timing scenarios, as you did. For ideas on this (and for designing thread-safe classes in general), it is recommended to read Java Concurrency in Practice.
Moreover, you can run stress tests, executing many threads simultaneously over an extended period of time. The number of threads should be way over the reasonable limit to make sure that thread contention happens often - this raises the chances that potential concurrency bugs manifest over time.
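A minimal sketch of such a stress test might look like this; the Counter class stands in for whatever class you are actually testing:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CounterStressTest {

    // Hypothetical class under test; replace it with your own thread-safe class.
    static class Counter {
        private int value;
        synchronized void increment() { value++; }
        synchronized int get() { return value; }
    }

    public static void main(String[] args) throws Exception {
        final int threads = 200;               // deliberately far above a "reasonable" thread count
        final int incrementsPerThread = 10_000;
        Counter counter = new Counter();

        ExecutorService pool = Executors.newFixedThreadPool(threads);
        CountDownLatch startGate = new CountDownLatch(1);   // releases all threads at once to maximise contention
        CountDownLatch endGate = new CountDownLatch(threads);

        for (int i = 0; i < threads; i++) {
            pool.submit(() -> {
                try {
                    startGate.await();
                    for (int j = 0; j < incrementsPerThread; j++) {
                        counter.increment();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    endGate.countDown();
                }
            });
        }

        startGate.countDown();   // start all threads simultaneously
        endGate.await();
        pool.shutdown();

        // With a correctly synchronized Counter this check passes; remove the
        // synchronization and lost updates will usually make it fail.
        int expected = threads * incrementsPerThread;
        if (counter.get() != expected) {
            throw new AssertionError("Lost updates: expected " + expected + " but got " + counter.get());
        }
        System.out.println("OK: " + counter.get());
    }
}
```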
Also, another thing I would recommend is to use code coverage measuring tools and set a high standard as your goal. For example, set a high goal for modified condition/decision coverage.
We use GroboUtils to create multi-threaded tests.
If you have code that you plan to test in order to make it reliable, then make it single threaded.
Threading should be reserved for code that either doesn't particularly need to work, or is simple enough to be statically analysed and proven correct without testing.
The root of our problem is Singletons. But Singletons are hard to break and in the meantime we have a lot of unit tests that use Singletons without being careful to completely clear them in the tearDown() method. I figure that a good way to detect tests like these is to look for memory leaks. If the memory used after tearDown() and System.gc() is more than when the test started, either the test leaked or more classes were loaded by the classloader. Is there any way to automatically detect this sort of problem?
Could you introduce a subclass, between TestCase and your individual test classes, that did the cleanup? Then subclasses would only be responsible for calling super.tearDown() - and only those that had a tearDown() of their own.
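A minimal sketch of that idea with JUnit 3, assuming hypothetical singletons that expose a reset() method:

```java
import junit.framework.TestCase;

// Intermediate base class that clears global state after every test.
public abstract class SingletonCleaningTestCase extends TestCase {

    @Override
    protected void tearDown() throws Exception {
        // Hypothetical singletons with a reset method; substitute your own.
        ConfigurationSingleton.reset();
        CacheSingleton.reset();
        super.tearDown();
    }
}

// Individual tests extend the intermediate class instead of TestCase.
// Only tests that define their own tearDown() need to remember super.tearDown().
class MyFeatureTest extends SingletonCleaningTestCase {

    public void testSomething() {
        // ...
    }

    @Override
    protected void tearDown() throws Exception {
        // test-specific cleanup first
        super.tearDown();
    }
}
```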
I completely agree with other posters that monitoring the memory usage isn't a viable way to track this - System.gc() is not going to behave as you expect, or with enough precision to achieve your goal.
You're going to need a tool that lets you inspect the reference graph and show allocation call stacks.
I've used OptimizeIt from Borland and JProfiler from ej-technologies, both with success (a quick google reveals that OptimizeIt may now be dead.)
There's also the possibility of using JVMTI to throw together a better monitor for this specific problem.
Edit: Weird, but as I was reviewing this answer, I got a phone call from Embarcadero, who have apparently purchased OptimizeIt, done some updating, and are now marketing it under the name J Optimizer.
Just a thought: if you have two empty tests run right after one another, the second one should not have a different memory used after teardown(). If it does, you (probably) have a leak somewhere in your setup()/teardown() system.
I don't think this is a good approach. System.gc() is not guaranteed to fully clean up any unused objects as you think it will.
If your root problem is that you have unit tests which end up using global data (singletons) without properly cleaning them up, you should attack the root problem: these unit tests. It shouldn't be too hard to find all tests that aren't using tearDown(), or to find all tests that use a particular singleton.
If your Singletons are only intended to be initialized one time, you could have code that checks for reinitialization and logs the current stack when it detects it. Then, if you check the stack, you will see which test got the ball rolling, and you can check the JUnit logs to see which test ran right before that one.
In terms of solving this problem more thoroughly, instead of detecting it, I would recommend having a singleton initializer that remembers what it initialized and has one teardown method that tears down everything it initialized. That way tests can be made to initialize only via this class, and only have to do one thing in tearDown().
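A rough sketch of such an initializer, assuming your singletons can be given some way to reset themselves (the Resettable interface here is made up):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Test-side registry through which all singleton initialization goes.
public final class SingletonTestRegistry {

    // Hypothetical contract; adapt it to however your singletons can be cleared.
    public interface Resettable {
        void reset();
    }

    private static final Deque<Resettable> initialized = new ArrayDeque<>();

    private SingletonTestRegistry() {}

    // Tests initialize singletons only through this method, so the registry
    // remembers everything that was set up.
    public static <T extends Resettable> T register(T singleton) {
        initialized.push(singleton);
        return singleton;
    }

    // One call in tearDown() clears everything that was initialized, in reverse order.
    public static void tearDownAll() {
        while (!initialized.isEmpty()) {
            initialized.pop().reset();
        }
    }
}
```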
I also think Carl Manaster's suggestion is a good one, but if you were using JUnit 4, you could have a teardown method that runs in the superclass without having to remember to call super. Unless you use the JUnit 3 GUI, JUnit 4 should be a drop-in replacement. The only thing is that to take advantage of its new features you have to migrate the whole test class; you can't have both styles live in the same class. So tests that interact with these singletons would have to be migrated one whole test class at a time.
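With JUnit 4 the same cleanup can live in an @After method of a base class, which runs automatically after each test without the subclass having to call super (again assuming hypothetical reset() methods):

```java
import org.junit.After;

public abstract class SingletonCleaningBase {

    // In JUnit 4, @After methods declared in a superclass run automatically
    // after each test in the subclass; no call to super is needed.
    @After
    public void clearSingletons() {
        ConfigurationSingleton.reset();   // hypothetical
        CacheSingleton.reset();           // hypothetical
    }
}
```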
You could use the Eclipse Memory Analyzer to automate analyzing heap dumps taken after each test or probably better after all tests. MAT can find memory leaks fairly automatically.