I have heard that tightly coupled code is hard to unit test. I don't understand how. Can somebody explain with an example?
Tight coupling means that you use implementations instead of interfaces, reducing your options when it comes to creating mock implementations and other testing utilities. Mocking frameworks (like Mockito for Android) can help, but tight coupling should nonetheless be avoided, as it is a bad practice.
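As a rough sketch (all class and method names here are invented for illustration), compare a service that creates its concrete dependency itself with one that receives it through an interface; only the second can be handed a fake or a Mockito mock in a test:

    // Tightly coupled: the concrete repository is created inside the class,
    // so a unit test cannot avoid whatever that repository really talks to.
    class TightlyCoupledOrderService {
        private final SqlOrderRepository repository = new SqlOrderRepository();

        int countOpenOrders() {
            return repository.findOpenOrderIds().size();
        }
    }

    class SqlOrderRepository {
        java.util.List<String> findOpenOrderIds() {
            // imagine a real JDBC query here
            throw new UnsupportedOperationException("needs a live database");
        }
    }

    // Loosely coupled: the dependency is an interface injected from outside,
    // so a test can pass in a simple fake (or a Mockito mock) instead.
    interface OrderRepository {
        java.util.List<String> findOpenOrderIds();
    }

    class OrderService {
        private final OrderRepository repository;

        OrderService(OrderRepository repository) {
            this.repository = repository;
        }

        int countOpenOrders() {
            return repository.findOpenOrderIds().size();
        }
    }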
However, this is probably the least problematic aspect of highly coupled code. It is generally discouraged because it limits your possibilities for refactoring and/or expanding the code. You should always keep some level of abstraction in your code, so that you can easily implement new modules and change current implementations. But do not overdo it, because programs with lots of one-to-one interface-implementation pairs are very redundant and hard to debug.
In general, you should have a look at some open-source projects and see how those are tested (for Android, check out the Google I/O app for example) and how the testing approach is reflected in the code. It all comes with experience and there is no better way to learn it than by analyzing how pros do it :-)
This question may look weird or even pointless.
Recently, I was reviewing some Java code where the developer used one of the methods from the unit-testing library "org.easytesting".
Example: he was using the method "Strings.isNullOrEmpty" of the "Strings" class from this library to verify that some values were not null or empty, and was using other classes/methods from it in other places in the code.
I know a library is developed to make our lives easier (a basic principle of Java) and can be used anywhere/everywhere, but is there a recommendation about using a unit-test library in live development code?
I know using it won't lead to a compatibility issue, because the unit test cases are always executed.
I have searched in many places; maybe I'm just missing the right term to search for.
It could be argued that a unit-test library is just a library, but I don't see it like this.
First, the purpose of a unit-test library is to be used in code that is not production code. This means that certain quality criteria relevant for production code might not be met. For example, if there is a bug in a unit-test library it is annoying, but it normally does not harm the production code. Likewise, performance may not be quite as relevant, nor thread safety, and so on. I don't want to say that the popular unit-testing frameworks are of bad quality. But the developers of these libraries have every right to take design decisions based on the assumption that their code will not be part of production code.
Secondly, using a library should be done in accordance with the philosophy of the respective library. For example, if you use a certain GUI library, this has implications for the way event handling is done in your application. And unit-testing frameworks come with the assumption that the framework is in control of the executable (via the test runner). Therefore, any function from such a library may depend on the test runner being set up and running. If some function from the library does not have this dependency, that is an implementation detail which may change with a new version of the library.
Finally, code should communicate intent. That includes includes (pun intended). It was not the intent of the developer to write unit-testing code, but that would be what the inclusion of a unit-testing library would communicate.
Considering that there are other, production-oriented libraries out there which check if a string is empty or null, any use of the testing framework's method should be treated as a strong code smell and identified in code reviews.
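For illustration (assuming Apache Commons Lang and Guava would be acceptable production dependencies in your project), the same check is available in plain Java or from production-oriented libraries:

    import org.apache.commons.lang3.StringUtils;
    import com.google.common.base.Strings;

    class NullOrEmptyCheckExamples {
        static boolean isNullOrEmpty(String value) {
            boolean plain = value == null || value.isEmpty();   // no dependency at all
            boolean commons = StringUtils.isEmpty(value);       // Apache Commons Lang
            boolean guava = Strings.isNullOrEmpty(value);       // Guava's Strings, not the testing library's
            return plain && commons && guava;                   // all three agree for every input
        }
    }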
In the future, this testing library may introduce changes in other parts which make running it in production either prohibitively expensive or insecure, as the code running through this empty-or-null check could be leveraged as an avenue of attack. Worse, if your team wants to pivot away from this testing framework, you now have to change production code, which many teams would be reluctant to do if all they think they're doing is changing test dependencies.
Without looking specifically at test libraries, here's an answer to this more general question:
Should you use the general-programming utility classes that are provided by any framework or library? (For example, should you use the StringUtils/CollectionUtils/etc provided by a web/UI/logging/ORM/... framework).
The arguments by the other answers are still mostly valid even in this more general case. Here are some more:
These utilities have been developed specifically for use by the framework. They likely only contain very specific methods with narrow use cases (those that are actually required by the framework) and nothing more. They might be optimized for specific internal behavior of the framework and not for general purposes.
Framework developers may introduce breaking changes without much thought, since they don't expect many users outside of their framework.
It would be alarming to see imports from e.g. a UI library in your back-end code; it looks like a code smell.
In modular projects, you wouldn't want to introduce additional dependencies on the framework's utilities (again, a dependency on a UI framework from your back-end modules is a code smell). It would also add a bunch of unnecessary transitive dependencies that may not even be compatible with your other dependencies.
So I would say generally you shouldn't use the utilities of frameworks, except in those places where you are actually working with those frameworks. But even then you should consider using Apache Commons or Guava etc. for consistency.
Now you could also replace the terms UI and back end with test and production in the last two points. Test frameworks are also special in the sense that you usually don't include them as a run-time dependency. So you would need to change the scope of the test dependencies to make them available at run-time. You should avoid this for the reasons given in the last point.
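For example, in a Maven build a testing library is normally confined to the test classpath; the coordinates and version below are only illustrative:

    <dependency>
        <groupId>org.easytesting</groupId>
        <artifactId>fest-assert</artifactId>
        <version>1.4</version>
        <scope>test</scope>  <!-- not available to production code at run-time -->
    </dependency>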
This morning in a job interview, I was asked: if I get some legacy code with no tests at all and I need to modify something, how would I do it to be completely sure that I break nothing?
My answer was: first of all I would create a test for the part of the code I need to modify; then, once the test is passing, I would make the modification, ensuring that the test still passes. So basically approaching it in a test-driven development style.
The person interviewing me did not seem really satisfied with my answer, so I would like to have your opinions on the subject. Are there better approaches?
Is my answer valid?
Thank you very much
Your answer is correct, but maybe too simplified.
In the real world, it is usually not that easy, especially for legacy systems where a lot of code is tightly coupled, there are bad abstractions, inefficient language constructs, etc. It is especially challenging if there are no other tests at all, and that happens.
Also, sometimes it is just not that easy to mock out the dependencies if they are hardcoded. Then it depends on the mocking framework's capabilities, whether it allows mocking out final classes, static methods, etc.
A good thing to mention might also be a few patterns from the "Working Effectively with Legacy Code" book.
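For instance, one such pattern is the characterization test: before touching the legacy code, you pin down what it currently does. A minimal sketch with invented names:

    import org.junit.Assert;
    import org.junit.Test;

    public class LegacyPriceCalculatorTest {

        // Characterization test: the expected value is whatever the existing code
        // returns today, so the test documents the status quo rather than a spec.
        @Test
        public void recordsCurrentBehaviourBeforeRefactoring() {
            LegacyPriceCalculator calculator = new LegacyPriceCalculator();
            Assert.assertEquals(109.0, calculator.priceWithTax(100.0), 0.001);
        }
    }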
When mocking dependent services to write unit test cases for any enterprise-grade Java service, I find setting up the data for the unit test cases a huge pain. Most of the time, this is the single most compelling reason for developers to not write unit test cases and to write integration-style test cases instead.
If the service depends on a couple of other services (which depend on their respective DAOs) and a DAO of its own, generating the when-thenReturn clauses for a reasonably nested object becomes quite an effort, and developers are seen taking the easy route: loading the entire Spring context and sourcing their data from the direct sources, which may not always give data that can traverse all required code paths.
With this in the background, a colleague of mine suggested: why not run a sample integration test and, using aspects, capture all of the relevant data points and serialize them to an XML representation which can be used for materializing test data for the unit test cases? To our pleasant surprise we found a framework called TestDataCaptureJ on GitHub which was very similar to this. It used aspects to capture the data points and generated the Java code to create the objects.
The motivation stated on the site seemed very apt and I was wondering if there are any other alternatives that can give similar features. Also, it would be great if the experts can critique this overall approach.
Also, the project is about 2 years old and has a few bugs which we had to fix and are hoping to give back as a Mavenized GitHub fork. I'm just checking to ensure that there is no other similar initiative from one of the well-known stables.
Thanks in advance!
I have two critiques of that approach... and please bear in mind that my knowledge of your context is almost nil, which means that what I suggest here might not work for you.
I've only once experienced a problem like the one you mention, and it was a symptom that there was too much coupling between the objects because their responsibilities were way too broad. Since then I use a Domain-Driven Design approach and I haven't had this problem again.
I prefer to use Test Data Builders to create test data. This approach allows me to have a template of what I want to build and just replace the bits I'm interested in for the test. If you decide to go this way, I strongly suggest you use a tiny library called Make-It-Easy that simplifies the creation of these builders.
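A hand-rolled builder of this kind might look like the sketch below (the Customer class and its fields are invented); Make-It-Easy mainly removes the boilerplate of writing such builders yourself:

    class Customer {
        final String name;
        final String country;
        final boolean premium;

        Customer(String name, String country, boolean premium) {
            this.name = name;
            this.country = country;
            this.premium = premium;
        }
    }

    class CustomerBuilder {
        // Sensible defaults form the "template"; a test overrides only what it cares about.
        private String name = "Default Name";
        private String country = "NL";
        private boolean premium = false;

        static CustomerBuilder aCustomer() { return new CustomerBuilder(); }

        CustomerBuilder withName(String name) { this.name = name; return this; }
        CustomerBuilder withCountry(String country) { this.country = country; return this; }
        CustomerBuilder premium() { this.premium = true; return this; }

        Customer build() { return new Customer(name, country, premium); }
    }

    // In a test, only the relevant detail is spelled out:
    //     Customer customer = CustomerBuilder.aCustomer().withCountry("DE").premium().build();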
And two suggestions, if you have some time:
Watch the presentation The Deep Synergy Between Testability and Good Design by Michael Feathers; part of the talk is about something very similar to what you're experiencing.
Read the book Growing Object-Oriented Software, Guided by Tests (aka GOOS); it has all sorts of insights about how to write simple, amazing, testable code.
I'm developing a Java-based server with NIO multiplexing, and I started to see a lot of frameworks... I don't understand whether these frameworks only make life easier or whether they also bring an increase in performance (for example Netty).
No framework can increase performance of what's underneath it. In the case of NIO I've come around to the view that it already is a framework itself. I've reviewed a couple of NIO frameworks such as Mina, and indeed wrote one myself, but my own conclusion is that this is largely wasted effort, that ultimately gets in the way one way or another. All you need is a well-written select loop and the appropriate data structures.
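To make that concrete, a bare-bones select loop looks roughly like the sketch below (an echo server on an arbitrary port, stripped of the error handling, partial-write handling, and buffer management a real server would need):

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.*;
    import java.util.Iterator;

    public class SelectLoopSketch {
        public static void main(String[] args) throws IOException {
            Selector selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.bind(new InetSocketAddress(8080));
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            ByteBuffer buffer = ByteBuffer.allocate(4096);
            while (true) {
                selector.select();                               // block until something is ready
                Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                while (keys.hasNext()) {
                    SelectionKey key = keys.next();
                    keys.remove();
                    if (key.isAcceptable()) {
                        SocketChannel client = server.accept();  // new connection
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        SocketChannel client = (SocketChannel) key.channel();
                        buffer.clear();
                        int read = client.read(buffer);
                        if (read == -1) {
                            client.close();                      // peer closed the connection
                        } else {
                            buffer.flip();
                            client.write(buffer);                // echo the bytes straight back
                        }
                    }
                }
            }
        }
    }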
I think the core point is that they make life easier/get you productive faster. They may be more or less performant compared to each other, or to your own code (no reason to think that if you coded it from scratch you would get better performance the first try - of course ultimately you own it so you can optimize it to death if you want and have the time).
Ultimately they are all using the Java NIO framework and classes, and the only way to outperform those is to do your own JNI - assuming you succeed - and that is hard stuff, really a specialty of its own within programming.
It depends on what you're trying to do. NIO frameworks are useful because they provide you with an abstraction over the core NIO machinery. However, they force you to use several design patterns you may not be comfortable with.
If you think you can adapt to those design patterns, you should probably use a framework. It will have fewer bugs, you will have less work to do, and ultimately you won't even need to see where all of the action happens. You just have to focus on what you are trying to achieve.
It has some additional overhead in comparison to a home-grown solution, but it is negligible.
It really depends on your level of knowledge with the java.nio API. If you're not sure on how things work then you should probably use a 3rd party API. If you know how things work and are capable of writing code without a 3rd party API then you should definitely use your own code without any strings attached. You can achieve better performance without extra things (3rd party API) going on.
I like to live by the KISS principle.
We have an aging enterprise application built on JSP and Servlets without any design strategy.
The application was built in early 2002 with 1,000 users in mind. After 2002, we received lots of requests from marketing partners.
Currently, the application has lots of spaghetti code with lots of ifs and elses. One class has more than 20,000 lines of code, with a huge body of functions and no abstraction.
Now we need to support billions of records.
What do we need to do immediately, and what gradually?
Do we have to refactor the application?
Which framework do we need to use?
How will the use of a framework help the end users?
How do we convince the leaders to do the refactoring?
How do we get faster response times compared to the current system?
Here is how I would approach this if I had appropriate company resources at my disposal (yeah right):
1. Get a good QA process going, with automated regression testing set up before making significant changes. I don't care how good you are, you can't put a system like that under unit test and reasonably control for regressions.
2. Map out interdependencies, and see how much an individual class can be tested as a unit.
3. How do you eat an elephant? One bite at a time. Take a given piece of required functionality (preferably something around the increased load requirements) and refactor the parts of the class or classes that can be worked on in isolation (see the sketch after this list).
4. Learn how to do step 3 above by reading Working Effectively with Legacy Code.
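To make step 3 concrete, here is a minimal sketch (all names invented) of sprouting one piece of logic out of the giant class into something that can be unit tested on its own:

    // Instead of editing the 20,000-line class directly, the branching logic for a
    // single feature is "sprouted" into its own small class that can be unit tested.
    class DiscountPolicy {
        double discountFor(int itemCount, boolean loyalCustomer) {
            if (itemCount > 100) {
                return loyalCustomer ? 0.15 : 0.10;
            }
            return loyalCustomer ? 0.05 : 0.0;
        }
    }

    // Inside the legacy class, the old if/else block is replaced by a single call:
    //     double discount = new DiscountPolicy().discountFor(itemCount, loyalCustomer);
    // The new class is covered by tests; the legacy class changes by only one line.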
Convenient way to refactor the application.
There are no "convenient" or "easy" ways to refactor an existing codebase, especially if the codebase looks like spaghetti.
... What do we need to do immediately, and what gradually?
That's impossible to answer without understanding your system's current architecture.
Do we have to refactor the application?
On the one hand, the fact that you have a lot of poorly designed / maintained code would suggest that it needs some refactoring work.
However, it is not clear that it will be sufficient. It could be that a complete rewrite would be a better idea ... especially if you need to scale up by many orders of magnitude.
Which framework do we need to use?
Impossible to answer without the details of your application.
How will the use of a framework help the end users?
It might reduce response times. It might improve reliability. It might allow more online users simultaneously. It might do none of the above.
Using a framework won't magically fix a problem of bad design.
How do we convince the leaders to do the refactoring?
You need to convince them that the project is going to give a good return on investment (ROI). You / they also need to consider the alternatives:
what happens if you / they do nothing, and
whether a complete rewrite is likely to give a better outcome.
How do we get faster response times compared to the current system?
Impossible to answer without understanding why the current system is slow.
The bottom line is that you probably need someone from outside your immediate group (e.g. an external consultant) to do a detailed review of your current system and report on your options for fixing it. It sounds like your management doesn't trust your recommendations.
These are big, big questions. Too broad for one answer really.
My best advice is this: start small, if you can. Refactor piece by piece. And most importantly, before touching the code, write automated tests against the current codebase, so you can be relatively sure you haven't broken anything when you do refactor it.
It may not be possible to write these tests, as the code may not be testable in its current form. But you should still make this one of your main goals.
By definition refactoring shouldn't show any difference to the users. It's only to help developers work on the code. It sounds like you want to do a more extensive rewrite to modernize the application. Moving to something like JSF will make life a lot easier for developers and will give you access to web component libraries to improve the user experience.
This is a question which needs a lengthy answer. To start with, I would suggest making sure that the application is well tested and working as per the specification. This means there are enough unit, integration, and functional tests. The functional tests also have to be automated. Once these are in place, a step-by-step refactoring can take place. Do you have enough tests to start?