Spring profiles - risky code in SVN - Java

We are developing a project with the Spring framework.
We use a Tomcat cluster, and in order to do some really advanced integration tests we added some controllers to the web app that allow risky operations which must never reach production.
What we learned is that we can use Spring profiles for this and annotate the risky controllers with
@Profile("Staging")
This annotation makes sure the bean is created only when the active profile is "Staging".
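For reference, a minimal sketch of such a guarded controller - the class name and endpoint are made up, but the @Profile usage is the pattern described above:

    import org.springframework.context.annotation.Profile;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    @Profile("Staging") // bean only exists when the "Staging" profile is active
    public class RiskyIntegrationTestController { // hypothetical name

        @RequestMapping("/internal-test/wipe-cache")
        public String wipeCache() {
            // risky, test-only behaviour would live here
            return "done";
        }
    }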
Call me paranoid, but this risky code now resides in our SVN repository and is part of the project code.
It seems that the slightest mistake could make this code part of production, allowing risky actions to exploiters.
Moreover, if some programmer forgets the annotation, the code will reach production for sure.
We all make mistakes.
Is there any mitigation for this issue?

I'll call you a bit paranoid. (wink) Hopefully you also have integration tests in your application, and they usually set up some of the environment - if they were ever to run in a production environment, they would probably screw up your database, send messages to other systems, etc.
Yet you don't worry about that. Why not? Maybe the answer to that question can also tell you how to package those risky pieces of code.
My suggestion: keep all the risky code in a single module (if you are using a multi-module build) and don't include this module in the production build (you can use Maven profiles for that; a sketch follows).
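A sketch of that suggestion in the parent POM, assuming a hypothetical module named risky-test-support; the module is only built when the profile is explicitly activated, so a default production build never packages it:

    <profiles>
      <profile>
        <id>staging-extras</id>
        <modules>
          <module>risky-test-support</module>
        </modules>
      </profile>
    </profiles>

Developers build with mvn -Pstaging-extras package; the release build simply omits the flag.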
Or let the code check for itself whether it is allowed to run. It could, for example, check for the presence of a certain file on the file system that you only create in your test environment, as in the sketch below.
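A minimal sketch of that self-check, assuming a hypothetical marker-file path that only exists on your staging machines:

    import java.nio.file.Files;
    import java.nio.file.Paths;

    public final class StagingGuard {

        private static final String MARKER = "/etc/myapp/ALLOW_RISKY_ACTIONS"; // hypothetical path

        private StagingGuard() {}

        // call at the top of every risky endpoint
        public static void requireStaging() {
            if (!Files.exists(Paths.get(MARKER))) {
                throw new IllegalStateException("Refusing risky action: staging marker file not found");
            }
        }
    }

Even if the class is deployed to production by mistake, it refuses to do anything there.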
It depends really on what you worry about.
But it is good to think about it. I know stories where load testing resulted in many orders being placed in an actual (external) order processing system.

The mistake you are speaking about is adding "Staging" to the list of active profiles. Yes, it is easy to do. However, it is just as easy to remove files from the file system, format the hard disk, or turn the electricity off. So your question really does sound like a kind of paranoia... :)
I think the problem is not in Spring profiles but in your development methodology. If you are not sure about some code, it should not be in production at all. How do you achieve this? Move from SVN to Git, and start using branches. Each task is a branch, without exceptions, and each task must be tested. You can then deploy any branch you want to staging, test it, and once you are sure the code is OK, merge/rebase it into master. Master should be tested as well, and can then be deployed to production.
In this case you do not need a "Staging" profile at all.

"one version rule" in Java dependency management--is this a good idea

For security reasons, we update third-party dependencies frequently.
We use Maven as our dependency management tool, but it is still hard work since we have 100+ projects to update.
1. How can we do this fast and soundly?
Sometimes we have to change our code to use the new dependency. Sometimes we don't have enough time to test, which causes exceptions in the production environment, like NoSuchMethodError.
2. Is the "one version rule" a good idea in Java? Has someone done this before?
For example, our Project A depends on spring-webmvc 5.3.9 and Project B depends on spring-webmvc 5.2.0. We want both A and B to depend on spring-webmvc 5.3.9. In fact, we want all our projects to depend on the same version.
Thank you
Sometimes we have to change our code to use the new dependency. Sometimes we don't have enough time to test, which causes exceptions in the production environment, like NoSuchMethodError.
This sentence sounds like you have bad practices inside your organisation. You should put in place a solid policy for your testing and deployment (CI/CD) process.
A good practice is to implement a BOM or a parent POM project that manages all your common dependencies. It is very good when it comes to managing and centralizing your library versions.
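As an illustration, a fragment of such a parent POM/BOM - here pinning the spring-webmvc version from the question; every child project then declares the dependency without a version and inherits it from this one place:

    <dependencyManagement>
      <dependencies>
        <dependency>
          <groupId>org.springframework</groupId>
          <artifactId>spring-webmvc</artifactId>
          <version>5.3.9</version>
        </dependency>
      </dependencies>
    </dependencyManagement>

Bumping the version here upgrades all 100+ projects consistently, which is exactly the "one version rule" you are asking about.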
Before any change goes to the production server, it should be tested.
You have to define a process for your tests: unit tests > integration tests > end-to-end tests.
Every new implementation should go through a pull request and review process.
Try to implement an AGILE workflow in your team.
My answer is not a silver-bullet approach, but I hope it helps you quite a lot. It is clearly an organisational issue that you have in your team or your enterprise. You have to implement a proper software development process.

Feign with Ribbon: reset

We are trying to use Feign + Ribbon in one of our projects. In production code we do not have a problem, but we have a few in our JUnit tests.
We are trying to simulate a number of situations (failing services, normal runs, exceptions, etc.), hence we need to configure Ribbon many times in our integration tests. Unfortunately, we noticed that even when we destroy the Spring context, part of the state survives, probably somewhere in static variables (for example, new tests still connect to the balancer from the previous suite).
Is there any recommended way to purge the static state of both these tools? (something like Hystrix.reset())
Thanks in advance!
We tried restarting the JVM after each suite - it works perfectly, but it's not very practical (we must set it up in both Gradle and IDEA, as the IDEA test runner does not honor this out of the box). We also tried renaming the service between tests - this works in, let's say, 99% of cases (it sometimes fails for some reason...).
You should submit a bug to Ribbon if it is the case that there is some static state somewhere. Figure out what minimal code causes the issue; if you are not able to do that, they won't do anything about it. In your own code base you should also search for any use of static that is not also final and refactor those if any exist.
Furthermore, you may find it useful to make a strong distinction between the various types of tests. It doesn't sound like you are writing a unit test to me. Even though you are just simulating these services and simulating failures, this sort of test is really an integration test, because you are testing whether Ribbon is configured correctly together with your own components. It would be a unit test if you tested only that your component configures Ribbon correctly. It's a subtle distinction, but it has large implications for how you structure the test.
On another note, don't dismiss what you have now as necessarily a bad idea. It may be very useful to have some heavyweight integration tests checking the behaviour of Feign if this is a mission-critical function; IMO it's a great idea in that case. But it is a heavyweight integration test and should be treated as such. You might even want to use some container magic etc. to achieve this sort of test, with services that fail in each of your different failure scenarios. This would live in CI, and developers usually wouldn't run it with each commit unless they were working directly on a piece of functionality concerning the integration.

Enforce different settings depending on build stage

I'm adding unit tests to an existing codebase. The application retrieves data from a server through REST, and the URL to that server is hard-coded in the application.
However, developers are obviously not testing new features, bug fixes, etc. against a live environment, but rather against a development server. To accomplish this, the development build has a different server-url string than the production build.
During development, a non-production URL should be enforced; when creating a production build, the production URL should be enforced instead.
I'm looking for advice on how to implement a neat solution for this, since forgetting to change the URL can currently have devastating consequences.
A Maven build script only tests the production value, not both. I haven't found any way to make build-specific unit tests. (Technologies used: Java, Git, Git-flow, Maven, JUnit)
Application configuration is an interesting topic. What you've pointed out here is definitely a very practical need, but even more so: if you need to repackage (and possibly rebuild) between environments, how do you truly know that what you deploy is the same thing that was actually tested and verified?
So load the configuration from a resource outside of the application package. A Java option pointing to a file on the file system, or a JNDI resource, are both good choices. You can also provide defaults for development by committing a config file and reading from it when the Java option is not specified, as in the sketch below.
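A minimal sketch of that approach - the property name, file paths and key are all hypothetical; the external file is given with -Dapp.config=/path/to/app.properties, and a bundled development default is used otherwise:

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Properties;

    public final class AppConfig {

        public static String serverUrl() throws IOException {
            String external = System.getProperty("app.config"); // set only in production
            Properties props = new Properties();
            try (InputStream in = external != null
                    ? new FileInputStream(external)
                    : AppConfig.class.getResourceAsStream("/app-dev.properties")) {
                props.load(in); // a failure here means the dev defaults are missing from the jar
            }
            return props.getProperty("server.url");
        }
    }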

A tool to detect broken JAR dependencies on class and method signature level

The problem scenario is as follows (note: this is not a cross-jar dependency issue, so tools like JarAnalyzer, ClassDep or Tattletale would not help, thanks).
I have a big project which is compiled into 10 or more jar artifacts. All jars depend on each other and form a dependency hierarchy.
Whenever I need to modify one of the jars, I check out the relevant source code plus the source code for the projects that depend on it, modify the code, compile, and repackage the jars. So far so good.
The problem is: I may forget to check out one of the dependent projects, because inter-jar dependency chains can be quite long and may change over time. If this happens, some jars go "out of sync" and I eventually get a NoSuchMethodError or some other class-incompatibility issue at run time, which is what I want to avoid.
The only solution I can think of, the most straightforward one, is to check out all the projects and recompile the bunch. But this takes time, especially if I rebuild on every small change. I do have a continuous integration server that could do this for me, but it's shared with other developers, so watching whether the build breaks there is not an option for me.
However, I do have all the jars, so hypothetically it should be possible to verify whether jars that depend on the code I modified have become inconsistent in method signatures, class names, etc. But how could I perform such a check?
Has anyone faced a similar problem before? If so, how did you solve it? Any tools or methodologies would be appreciated.
Let me know if you need clarification. Thanks.
EDIT:
I would like to clarify my question a little bit.
The ultimate goal of this task is to check that the changes I have made will compile against the whole project. I am looking for a tool/technique that would aid me in performing such a check.
Consider this example:
You have 2 projects: A and B which are deployed as A.jar and B.jar respectively. A depends on B.
You wish to modify B, so you check it out and modify a method signature that A happens to depend on. You can compile B and run all tests by itself without any problems because B itself does not depend on anything. So you happily commit your changes.
In a few hours the complete project integration fails because A could not be compiled!
How do I avoid this?
The kind of tool I am looking for would retrieve A.jar and check that all dependencies in A on the new modified B are still fine. Like a potential compilation error that would happen if I were to recompile A and B sources together.
Another solution, as was suggested by many of you, is to set up a local continuous integration system that would recompile the whole project locally. I don't mind doing this, but I want to avoid doing it inside my workspace. On the other hand, if I check-out all sources to another temporary workspace, then I need to mirror my local changes to the temporary workspace.
This is quite a big issue in my team, as builds break very often because somebody forgot to check out (or open in Eclipse) the right set of projects. I tried persuading people to check out the sources and recompile the bunch before committing, but not only does that take time, it requires running quite a few commands, so most people just find it too troublesome. If the technique is not easy or automated, it's unusable.
If you do not want to use your shared continuous integration server, you should set up a local one on your developer machine and run the rebuild process there on every change.
I know Jenkins - it is easy to set up (just start it) on a local machine, and I would advise running it locally if the IT infrastructure provides nothing that fits your needs.
Checking signatures is unfortunately not enough. Having the correct signatures does not mean it will work - it is all about contracts, not just signatures. What happens if the new version of a library keeps the same method signature but now expects the elements of an ArrayList parameter in reverse order? You will run into issues sooner or later (a minimal signature-check sketch closes this answer, but note its limits). You might also consider adopting a dependency-management tool like Ivy or Maven:
http://ant.apache.org/ivy/
http://maven.apache.org/
Yes, it can be painful to introduce, but once you have it, it will "guard" your versions forever; you should never run into such an issue again. But even those build tools are not 100% accurate. The only proper way of dealing with incompatible libraries - I know you won't like my answer - is extensive regression testing. For this you need a bunch of testing tools. There are plenty of them out there: from very basic unit testing (JUnit) to database testing (JDBC Proxy) and UI testing frameworks like SWTBot (depending on whether your app is a web app or a thick client).
Please note that if your project gets really huge and you have a large number of dependencies, you are almost never using all of the code in them. Trying to check all interfaces and all signatures is way too much; it is not necessary to test everything when your code uses, say, 30% of the library code. What you need to test is what you actually use, and that can only be done with extensive regression testing.
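To make the signature point concrete: a cheap reflective guard test can at least turn a run-time NoSuchMethodError into a test-time failure, even though it proves nothing about behavioural contracts. A minimal JUnit sketch, with hypothetical class and method names:

    import org.junit.Test;

    public class BApiGuardTest {

        @Test
        public void orderServiceSignatureUnchanged() throws Exception {
            Class<?> service = Class.forName("com.example.b.OrderService"); // hypothetical class in B.jar
            // throws NoSuchMethodException if the method was removed or its parameters changed
            service.getMethod("process", String.class, int.class);
        }
    }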
I have finally found a whole treasure box of answers in this post. Thanks for the help, everyone!
The bounty goes to K. Claszen for the quickest and most helpful input.
I also think that just setting up a local Jenkins is the best idea. What tool do you use for builds? Maybe you can improve your situation by switching to Maven as your build tool? It is smarter about this and doesn't recompile the full project unless you ask for it directly. But switching to it can be a HUGE pain in the neck - it heavily depends on how your project is organized now...
And about the VCS: there is a Mercurial/SVN bridge, so you can use a local Mercurial repository for your development.
Check this link: https://www.mercurial-scm.org/wiki/WorkingWithSubversion
There is a solution, jarjar, which allows different versions of the same library to be included multiple times in the dependency graph.
I use IntelliJ, not Eclipse, so maybe my answer is too IDE-specific. But in IntelliJ, I would simply include the modules from B in A's project, so that when I make changes to B, A breaks immediately when compiling in the IDE. Modules can belong to multiple projects, so this is not duplication; it's just adding references in the IDE to modules in other projects.

Is there a way to prevent Maven Test from rebuilding the database?

I've recently been asked to, effectively, sell my department on unit testing. I can't tell you how excited this makes me, but I do have one concern. We're using JUnit with Spring and Maven, and this means that each time mvn test is called, it rebuilds the database. Obviously, we can't integrate that with our production server -- it would kill valuable data.
How do I prevent the rebuilding without telling maven to skip testing?
The best I could come up with was to point the tests at a test database (line breaks added for readability):
mvn test
-Ddbunit.schema=<database>test
-Djdbc.url=jdbc:mysql://localhost/<database>test?
createDatabaseIfNotExist=true&
useUnicode=true&characterEncoding=utf-8
I can't help but think there must be a better way.
I'm especially interested in learning if there is an easy way to tell Maven to only run tests on particular classes without building anything else? mvn -Dtest=<test-name> test still rebuilds the database.
======= update =======
Bit of egg on my face here. I didn't realize that I was using the same variable in two places, meaning that the POM was using a "skip.test" variable both for rebuilding the database and for running the tests...
Update: I guess that DBUnit rebuilds the DB because it is told to do so in the test setup method. If you change your setup method, you can eliminate the DB rebuild. Of course, you should do it so that you get the DB reset when you need it and omit it when you don't. My first bet would be to use a system property to control this. You can set the property on the command line the same way you already do with jdbc.url et al.; then, in the setup method, you add an if that tests for that property and only resets the DB when it is set, as sketched below.
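A minimal sketch of that setup method - the property name and the reset helper are hypothetical:

    import org.junit.Before;

    public class MyDaoTest {

        @Before
        public void setUp() {
            // only reset when the build passes -Ddb.reset=true
            if (Boolean.getBoolean("db.reset")) {
                rebuildTestDatabase(); // your existing DBUnit setup would go here
            }
        }

        private void rebuildTestDatabase() {
            // dataset load / schema reset
        }
    }

So mvn test -Ddb.reset=true rebuilds the database, while a plain mvn test leaves the data alone.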
A test database, completely separated from your production DB, is definitely the best choice if you can have it. You can even use e.g. Derby, an in-memory DB which can run embedded within the JVM. But if you absolutely can't have a separate DB, use at least a separate test schema inside the shared DB.
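For example, an embedded in-memory Derby connection needs nothing but the Derby jar on the test classpath, and the database disappears when the JVM exits (database name hypothetical):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public final class TestDb {

        public static Connection open() throws SQLException {
            // creates the in-memory database on first use
            return DriverManager.getConnection("jdbc:derby:memory:testdb;create=true");
        }
    }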
Either way, I would recommend putting your DB connection parameters into profiles within your POM, the default being the test DB, with a separate profile containing the production settings. This way it can never happen that you accidentally run your tests against the production DB.
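A sketch of such profiles - the property name follows the jdbc.url convention already used in the question; host and database names are hypothetical:

    <profiles>
      <profile>
        <id>test-db</id>
        <activation>
          <activeByDefault>true</activeByDefault>
        </activation>
        <properties>
          <jdbc.url>jdbc:mysql://localhost/myapp_test?createDatabaseIfNotExist=true</jdbc.url>
        </properties>
      </profile>
      <profile>
        <id>prod-db</id>
        <properties>
          <jdbc.url>jdbc:mysql://db.internal/myapp</jdbc.url>
        </properties>
      </profile>
    </profiles>

Tests run against the default test-db profile; the production settings require an explicit mvn -Pprod-db, so an accident needs two mistakes instead of one.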
In general, however, it is also important to understand that tests which run against a DB are not really unit tests in the strict sense, but rather integration tests. If you have an existing set of such tests, fine, use them as much as you can. However, you should try to move towards adding more real unit tests, which test only a small, isolated portion of your code at once (a method or class at most), ideally self-contained (needing no DB, network, config files, etc.) so they can run fast - this is a very important point.
If you have 5000 unit tests and each takes 5 seconds to run, that totals almost 7 hours, so you obviously won't run them very often. If a test takes only 5 milliseconds, you get the results in less than half a minute, so you can afford to run all your tests before you commit your latest change - many times a day. That makes a huge difference in the speed of feedback you get from the tests.
Hope this helps.
We're using JUnit with Spring and Maven, and this means that each time mvn test is called, it rebuilds the database.
Maven doesn't do anything with databases by itself, your code does. In any case, it's very unusual to run tests (which are not unit tests) against a production database.
How do I prevent the rebuilding without telling maven to skip testing?
Hard to say without more details (you're not showing anything), but profiles might be the way to go.
Unit tests, by definition, only operate on a single component of the system. You should not attempt to write unit tests that integrate with any external services (web, DB, etc.). My solution is to use a good mocking framework to stub out the behaviour of any dependencies your components have. This encourages good interface APIs, since most mocking frameworks work best with simple interfaces. It is best to create a Repository-pattern interface for all interactions with your DB and then mock out the implementation any time you test a class that interacts with it; you can then functionally test your Repository implementation separately. This also has the added benefit of keeping your unit tests fast enough to remain part of your CI, so that your feedback cycle is as fast as possible.
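A minimal sketch of that approach using Mockito as the mocking framework (any comparable framework works; all names here are hypothetical):

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import org.junit.Test;

    public class UserServiceTest {

        interface UserRepository { // the Repository-pattern seam around the DB
            String findNameById(long id);
        }

        static class UserService {
            private final UserRepository repo;
            UserService(UserRepository repo) { this.repo = repo; }
            String greet(long id) { return "Hello, " + repo.findNameById(id); }
        }

        @Test
        public void greetsUserByName() {
            UserRepository repo = mock(UserRepository.class);
            when(repo.findNameById(42L)).thenReturn("Ada");

            assertEquals("Hello, Ada", new UserService(repo).greet(42L));
        }
    }

The test never touches a real database, so it runs in milliseconds and stays in the CI loop.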
