Let me rephrase so as to put the pointy end of the question more directly: Neo4j worked with Roo at one time (maybe not perfectly, but some simple examples published in at least one book apparently did work). Why can't I download the version of Roo that the author used and duplicate the author's results?
A shallow answer, of course, is that software dependencies not part of the Roo distribution have changed.
A deeper question is, why does this happen? That is, why can't I download the versions of the software that provided those dependencies to Roo at the time of the author's writing and expect to be able to duplicate his results?
It's at this point that I'm a little stuck, and I can't see why that should be. I don't seem to have any way to identify what those versions might have been, though it seems like that ought to be a critical part of Roo's configuration management. Come to think of it, I don't recall this sort of record-keeping being part of typical practice except where the RPM package manager is involved. Now, maybe my perceptions on this point are flat out wrong. But if they're not, doesn't the usual way of doing open-source development need to be upgraded on this point? Or maybe I'm so completely wrong that you can turn back Roo's clock, so to speak. If that's the case, will someone please tell me how to turn it back so that Neo4j works as well as it once did? (However well or badly that was, I don't really care; I just want to replicate results.)
Is that a better way of expressing and attacking the problem? I'm trying very hard to prop up one of the couple of books written to date about Roo. Frankly, I'd like to see the book's author(s) or publisher wade in and help me or rebut me, because the book is still being sold. If the ongoing example is as badly broken as I think, it seems to me that it would be wrong to continue selling the book, or at least to sell it without a clear warning to the reader. O'Reilly published the book in question and I have a high subjective opinion of their business ethics--a high enough opinion that I wonder if I'm getting everything wrong.
Generally when you're wrong, you can depend on 100 people to tell you so (plus another 10-20 to tell you you're wrong on points that you did get right, and another 5-10 who seize on some basically irrelevant point and contradict you apparently just for the fun of it, without in any way moving the discussion forward--a cooperative concept they seem incapable of grasping, let alone following). But I, and others who've asked essentially the same question, hear nothing but the crickets. Chirp-chirp???
mv (sorry: I wasn't immediately able to find the markup syntax used to notate the IDs of members) asked a question about the future status of Roo's support for Neo4j, which appears to have foundered. A related question puzzles--and frustrates--me mightily. Neo4j was supported under Roo 1.1.4, but when I try code that apparently worked when 1.1.4 was current (from Josh Long's book Getting Started with Roo), the code fails in exactly the way it fails under 1.2.5 (and the upcoming release, 1.2.6). In other words, it appears that support for Neo4j was removed retroactively, so to speak.
My question follows as a generalization of that observation: under what technical circumstances (I don't wish to consider possible legal reasons) would it be good (i.e., sound, practical, necessary, &c.) to retroactively alter the behavior of a released version of a software product?
Currently, I'm finding this decision as respects Roo inconvenient, and so I think I may overlook good reasons for such a decision. Please note, however, that my question doesn't specifically pertain to Roo. For one thing, I don't relish a discussion in which participants work to make the Roo team look bad. For another, I'm interested in the general case, not merely the particular case of Roo. Actually, it seems to me at the moment that the absence of retroactive inconsistencies in behavior is a necessary condition for robust systems. I mean, for example, at exactly what time did the inconsistency begin? Was it during or after the stated duration of viability of the release in question? Probably I'm wearing my Chicken Little hat, but right now it seems to me as though retroactive inconsistency has "Humpty Dumpty" written all over it.
That being said, I suppose I'm sneaking in a second question, but I would very much appreciate being told that my premise as respects Roo is inaccurate; that is, that Neo4j can somehow be used under some appropriately old released version of Roo. In that case, I would also be immensely curious to know how this might be accomplished. Roo doesn't require any setup configuration, so there appears to be no opportunity for a configuration tweak. Actually, the only stated requirements are a JDK and Maven under Linux, OS X, or Windows. But the addon command apparently queries a database of some sort. Perhaps that is an unstated dependency responsible for retroactively inconsistent Roo behavior.
Having snuck in a second question, I'm finding it difficult to resist the temptation to go for a third. If I were to succumb, the question would be this: how, in the particular case of Roo, is it possible (assuming that the release code has not been surreptitiously changed) that the behavior of an old release has changed? It seems to me that the answer must lie in Roo's dependencies. But, assuming none of the dependencies has retroactively changed its behavior, can Roo do so without actually modifying the released code? It seems to me that it cannot, in which case I'd be exceptionally eager to know which dependency (assuming there is only one) has retroactively changed its behavior, and why. But I think I may yet find the resources to master even the quite strong temptation to pose that question. :-)
Long question...
Running an old release should duplicate its behavior, but there might be inconsistencies due to various circumstances. It will be hard to pinpoint those without a more elaborate description of the problem and a stack trace (if available).
You state that you believe the software dependencies of the Roo distribution might have changed, but this should not be the case: Maven, acting as your dependency manager, should take care of that as long as there are no SNAPSHOT dependencies in your pom.xml (or rather, in the dependency tree).
But there are other reasons the behavior might be different now. It could well be the version of the JDK you use. These should also be backward compatible, but on Wikipedia I saw that Spring Roo doesn't support Java 8, for example.
Then there is your operating system, but I believe it would indicate some sort of bug if that were the issue now.
Finally, I would look at third-party add-ons for Spring Roo. Unfortunately I am not familiar with them, but it seems to me that a third-party add-on is downloaded with some command that doesn't necessarily ask for the 'correct', compatible version of that add-on.
I hope this answer to the title question helps you. Your second and third questions did help me in formulating this answer, but generally it would be a good idea to make separate posts for such snuck-in questions.
Related
I have an application that is undergoing massive rework, and I've been exploring different options - chug along 'as is', redo the project in a different framework or platform, etc.
When I really think about it, here are 3 major things I really dislike about java:
Server start/stops when modifying controllers or other classes. Dynamic languages are a huge win over Java here.
Hibernate, Lazyloading exceptions (especially those that occur in asynchronous service calls or during Jackson JSON marshalling) and ORM bloat in general. Hibernate, all by itself, is responsible for slow integration startup times and insanely slow application startup times.
Java stupidity - inconsistent class-loading problems when running your app inside your IDE compared to Tomcat. Granted, once you iron out these issues, you most likely won't see them again. Even still, most of these are actually caused by Hibernate, since it insists on a specific Antlr version and so on.
After thinking about the problem... I could solve or at least improve the situation in all 3 of these areas if I just got rid of Hibernate.
Have any of you reworked a 50+ entity java application to use mongo or couch or similar database? What was the experience like? Do you recommend it? How long did it take you assuming you have some pretty great unit/integration tests? Does the idea sound better than it really is?
My application would actually benefit in many areas if I could store documents. It would actually open up some very cool and interesting features for this application. However, I do like being able to create dynamic queries for complex searches... and I'm told that Couch can't do those.
I'm really green when it comes to NoSQL databases, so any advice on migrating (or not migrating) a big java/spring project would be really helpful. Also, if this is a good idea, what books would you recommend I pick up to get me up to speed and really make use of them for this application in the best way possible?
Thanks
In any case, your rant doesn't just cover problems with the previously made (legacy) decision for Hibernate but also with your development as a programmer in general.
This is how I would do it, should a similar project be dropped in my lap and in dire need of refactoring or improvement.
Whether you should make big changes or stick with smaller ones depends on the stage of your software's lifetime and the time pressure involved. Nevertheless, migrating in increments seems to be your best option in the long term.
Keeping the application written in Java for the short term seems wise; a major rewrite in another language will definitely break acceptance and integration tests.
As suggested by Joseph, make the step from Hibernate to JPA. It shouldn't cost too much time. And from there you can switch the back-end to some other form of storage. Work towards a way of separating concerns. Pick whatever concept seems best: some prefer MVC, while others might opt for CQRS, and still others adore another style of segmentation/separation.
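To make the Hibernate-to-JPA step concrete, here is a minimal sketch (the Customer entity and CustomerDao are hypothetical, not taken from the question): coding against the standard javax.persistence API keeps Hibernate a replaceable implementation detail behind a vendor-neutral interface.

```java
// Customer.java -- hypothetical entity using only standard JPA annotations.
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class Customer {

    @Id
    @GeneratedValue
    private Long id;

    private String name;

    protected Customer() { }                 // no-arg constructor required by JPA

    public Customer(String name) { this.name = name; }

    public Long getId() { return id; }
    public String getName() { return name; }
}

// CustomerDao.java -- hypothetical DAO coded against EntityManager instead of
// Hibernate's Session, so the JPA provider (or the back end) can be swapped later.
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.persistence.TypedQuery;

public class CustomerDao {

    @PersistenceContext
    private EntityManager em;

    public void save(Customer customer) {
        em.persist(customer);
    }

    public List<Customer> findByName(String name) {
        TypedQuery<Customer> query = em.createQuery(
                "select c from Customer c where c.name = :name", Customer.class);
        return query.setParameter("name", name).getResultList();
    }
}
```

Nothing in this sketch imports org.hibernate, which is the point: Hibernate can still run underneath, but only the persistence configuration names it.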
Since the JVM supports many languages, you can always switch to any of those or at least partially implement functionality in more dynamic languages. This will solve part of the problem where you keep bumping into the "stupidity" of Java, while still retaining the excellent optimizations of current JVMs at runtime.
In addition, you might want to set up automatic integration tests... since the application will hopefully never be run from your IDE, these tests will give you honest results.
Side note: I never trust my IDE to get dependencies right if the IDE has capabilities to inject its own libraries into my build or runtime path.
So to recap in short: small steps; lose Hibernate and go more abstract to JPA; if Java becomes stupid, then gradually switch to a clever language. Your primary concern should be to restructure the code base without losing functionality, keeping in mind to have an open design which will make adding interesting and cool features easier later on.
Well, much depends on things like "what exactly are the pain points with Hibernate?" (I know, you gave three examples...)
But those aren't core issues over the long haul. What you're running into is the nature of a compiled language vs. a dynamic one; at runtime, it works out better for you (as Java is faster and more scalable than the dynamic languages, based on my not-quite-exhaustive tests), but at development time, it's less amenable to just hacking crap together and hoping it works.
NoSQL isn't going to fix things, although document stores could, but there's a migration step you're going to have to go through.
Important: I work for a vendor in this space, which explains my experience in the area, as well as the bias in the next paragraph:
You're focusing on open source projects, I suppose, although what I would suggest is using a commercial product: GigaSpaces (http://gigaspaces.com). There's a community edition that would allow you to migrate JPA-based Java objects to a document model (via the SpaceDynamicProperties annotation); you could use JPA for the code you've written and slowly migrate to a fully document-oriented model at your convenience, plus complex queries aren't an issue.
All of those points usually cause problems due to incompetence rather than Hibernate or Java being problematic:
apart from structural modifications (adding fields or methods), all changes in the Java code are hot-swapped in debug mode, so you can save & test (without any redeploy).
the LazyInitializationException is a problem for Hibernate beginners only. There are many clear solutions to it, and you'll find them with a simple Google or SO search. You can always set your collections to fetch=FetchType.EAGER, or you can use Hibernate.initialize(..) to initialize lazy collections (a minimal sketch follows after this list).
It is entirely normal for a library to require a specific version of another library (the opposite would be suspicious and wrong). If you keep your classpath clean (for example by using Maven or Ivy), you won't have any classloading issues. I never have.
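To make the LazyInitializationException point concrete, here is a minimal sketch (the Invoice entity and InvoiceService are hypothetical): either map the collection as eager, or initialize it while the session is still open.

```java
// Invoice.java -- hypothetical entity; the element collection is lazy by default.
import java.util.HashSet;
import java.util.Set;
import javax.persistence.ElementCollection;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class Invoice {

    @Id
    @GeneratedValue
    private Long id;

    private String customerName;

    // Option 1: declare the collection eager so it always loads with the entity:
    //   @ElementCollection(fetch = FetchType.EAGER)
    @ElementCollection
    private Set<String> items = new HashSet<String>();

    public Long getId() { return id; }
    public String getCustomerName() { return customerName; }
    public Set<String> getItems() { return items; }
}

// InvoiceService.java -- Option 2: keep the mapping lazy, but force the collection
// to load inside the transaction, before the session closes.
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.hibernate.Hibernate;

public class InvoiceService {

    @PersistenceContext
    private EntityManager em;

    public Invoice loadWithItems(Long invoiceId) {
        Invoice invoice = em.find(Invoice.class, invoiceId);
        // After this call the items can be read (or serialized by Jackson)
        // outside the session without a LazyInitializationException.
        Hibernate.initialize(invoice.getItems());
        return invoice;
    }
}
```

Eager fetching is the simpler fix but loads the collection on every query; explicit initialization keeps the lazy mapping and pays the cost only where the collection is actually needed.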
Now, I will provide an alternative. Spring Data is a new portfolio project by SpringSource that allows you to use your entities with a bunch of NoSQL stores.
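As a rough sketch of the Spring Data idea (reusing the hypothetical Invoice entity from the previous sketch; module and artifact names have shifted across Spring Data releases, so treat the details as assumptions): you define only a repository interface, and the same programming model works against JPA now and against a document store such as MongoDB later.

```java
// InvoiceRepository.java -- hypothetical repository; Spring Data generates the
// implementation at runtime. With spring-data-jpa, Invoice is a JPA @Entity;
// with spring-data-mongodb it would be annotated @Document, but this interface
// and its callers stay the same.
import java.util.List;
import org.springframework.data.repository.CrudRepository;

public interface InvoiceRepository extends CrudRepository<Invoice, Long> {

    // Query derived from the method name -- no JPQL or Mongo query string needed.
    List<Invoice> findByCustomerName(String customerName);
}
```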
Yesterday I went to English class and met a new friend. He talked with me about the work he does (I am still studying in school).
In his company, the customers have many requests for their projects. If you use a framework but don't understand all of its components, you will run into problems with your source code that you can't fix, because the framework was built by someone else. Customers pay you to develop their project, so you must complete it yourself, and if an error occurs you have to fix it. But if you use a framework, whether you can fix it or not is 50/50.
I wonder about what he said. Can you help me choose the best way: framework or not?
We have many kinds of customers and we must work with technologies such as Struts, Hibernate, Spring, and so on. If we don't use a framework, it takes us a very long time to complete a project, but if we do use one, I don't believe I can understand all of its components.
Thank you for your suggestions!
• Should I use existing frameworks in my projects?
Yes, in general you should. The creators of the frameworks have put large amounts of work into them to make them good, and many other people use the frameworks, too. That means that the code is well-tested in practice. When you write your own code, it will be tested by just you and your team.
• What happens if there are bugs in the framework, how could I possibly fix them?
Good question, I don't know an answer right now. Most probably you would write some code of your own to work around the problem, like a small wrapper class.
• Do I have to understand the complete frameworks before I can use them?
No, you don't. Some frameworks are large and cover each and every aspect of software development. In most cases you only have to learn the things you really want to do, and some more. But not every detail.
• When I use a framework, is that cheating, since my customer wants me to develop software?
No, it isn't. Your customer doesn't really want you to do much work, he rather wants his projects to be done and finished. That means if you can do less work and profit from other's work, that's usually fine.
• We must work with third-party products like Struts/Hibernate/Spring, and if we are forced to implement them ourselves, the projects will take a very long time.
You really don't want to implement everything that Spring, Hibernate and Struts have already solved. So use these frameworks and be glad that someone else did the work. It's many man-years that you will save.
There are many factors to consider:
Is the framework commercial? If so, does the framework have a responsive support team with the ability to provide demos, documentation, consulting, workarounds and hot-fixes? Can you purchase the source code to make any tweaks you need? (Is it worth it to pay extra to have access to the source, and can you redistribute a modified copy?)
Is the framework "open source"? If so, does the framework have a responsive forum or mailing list that can provide answers to problems? Are there paid consultants or contractors? Is the documentation good? Is the framework popular and is it being maintained? Can you apply hot-fixes as needed?
How much "time" is required to learn the framework? Do special conventions need to be used? Does using the framework cause some lock-in that will be incompatible with future requirements?
Etc, etc.
This all leads to: Does using the framework ultimately make work more productive?
I think it depends on the size of the project. If you're working on a small project, it probably makes no sense to use a framework, because you're going to be less productive.
If, instead, you're working on a big project, a framework can help you a lot.
For example, in the case of Hibernate, if you're working on a project with three or four objects/tables, maybe it makes no sense to use it, because it's probably much easier to work with JDBC, and the software will even run much faster. But if you're on a project with dozens of objects/tables, working with JDBC can be a big headache, and Hibernate helps you a lot.
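For what it's worth, here is a minimal sketch of the plain-JDBC style (the customer table, its columns, and the connection settings are hypothetical). For three or four tables this is perfectly manageable; for dozens of tables, hand-writing this for every query becomes the headache described above, which is where Hibernate earns its keep.

```java
// CustomerJdbcDao.java -- hypothetical DAO showing one hand-written JDBC query.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class CustomerJdbcDao {

    private final String url;        // e.g. "jdbc:h2:mem:test" (hypothetical)
    private final String user;
    private final String password;

    public CustomerJdbcDao(String url, String user, String password) {
        this.url = url;
        this.user = user;
        this.password = password;
    }

    public List<String> findNamesByCity(String city) throws SQLException {
        List<String> names = new ArrayList<String>();
        Connection con = DriverManager.getConnection(url, user, password);
        try {
            PreparedStatement ps = con.prepareStatement(
                    "select name from customer where city = ?");
            try {
                ps.setString(1, city);
                ResultSet rs = ps.executeQuery();
                while (rs.next()) {
                    names.add(rs.getString("name"));
                }
            } finally {
                ps.close();
            }
        } finally {
            con.close();
        }
        return names;
    }
}
```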
The time you lose in the configuration of the framework is small compared to the big benefit of the simplification of the development.
As for possible bugs in the framework, it is important to use a framework with good support and a good community that can help you solve your problems.
Also, if you use an open source framework, you can try to fix the bug yourself, add a new feature, or modify an existing one to match your project's needs.
I have just created a mid-sized web application using Java, a custom MVC framework, and JavaScript. My code will be reviewed before it's put on the production servers (internal use).
The primary objective of building this app was to solve a small problem for internal use and understand the custom made MVC framework used by my employer. So, my app has gone through MANY iterations, feature changes and additions.
So, bottom line, the code is very very dirty and this is my first "product level" Java app.
What are your suggestions? What are some basic checks/refactoring I should do before the code review?
I am thinking about:
Java best practices (conventions)
Make the code simple to understand for the developer who will maintain it. (won't be me)
I noticed I have created some unnecessary objects and used HashMaps/ArrayLists where I could easily have used some other data structure and achieved the solution. So, is that worth changing?
Update
Your Code Sucks and I Hate You: The Social Dynamics of Code Reviews
If you haven't already (assuming you use an IDE like Eclipse):
get the Checkstyle and FindBugs plugins
go through their configuration and tune to your style
run them on your code
resolve all issues reported
you can also tune the compiler warning settings of Eclipse itself and possibly make them more strict in what is reported.
Look at code structure:
get the JDepend plugin
investigate your package structure
Code against interfaces (Map, List, Set) instead of implementation classes (HashMap, ArrayList, TreeSet); see the sketch after this list.
Complete your Javadoc and check that it is up to date after all refactorings.
Add JUnit tests; if you have no time left to test the whole application, at least create a test for every bug you find and solve from now on. This helps grow a test set as you go.
Next time design and build your application with the end goal in sight. Always assume that the next guy having to maintain your code will know how to find you :-)
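A tiny sketch of the "code against interfaces" point above (class and variable names are made up): declare fields, parameters, and return types as Map/List/Set, and name the concrete class only at construction time, so the implementation can change without touching callers.

```java
// ReportIndex.java -- hypothetical class; only the constructors name concrete types.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ReportIndex {

    // Declared as interface types.
    private final Map<String, List<String>> entriesByCategory =
            new HashMap<String, List<String>>();

    public void add(String category, String entry) {
        List<String> entries = entriesByCategory.get(category);
        if (entries == null) {
            entries = new ArrayList<String>();   // could become a LinkedList later
            entriesByCategory.put(category, entries);
        }
        entries.add(entry);
    }

    // Callers see List, not ArrayList, so the implementation can change freely.
    public List<String> entriesFor(String category) {
        List<String> entries = entriesByCategory.get(category);
        return entries == null ? new ArrayList<String>() : entries;
    }
}
```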
Unit tests, and they should be automated as part of your build. You should already have these, but if not, do it now. It will definitely make the refactoring easier, as well as improving your general confidence in the code (and the guy who will be maintaining it).
Logging.
One of the more overlooked things is the importance of logging. You need to have a decent logging methodology in place; a minimal sketch follows after these points. Even though this is an internal app, make sure that the basic logs can help regular users find issues, and provide more detailed logging so that you (the developer) know where to look.
Comment your code, explain why it's doing what it's doing and what assumptions have been made.
Try to reduce the amount of mutating state.
Try to remove any singletons you may have.
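On the logging point above, a minimal sketch using java.util.logging so it needs no extra dependency (a real web app would more likely use Log4j or SLF4J behind a facade; the class and messages here are made up):

```java
// ReportService.java -- hypothetical service showing coarse INFO messages for
// regular users and finer detail for the developer.
import java.util.logging.Level;
import java.util.logging.Logger;

public class ReportService {

    private static final Logger LOG = Logger.getLogger(ReportService.class.getName());

    public void generate(String reportId) {
        LOG.info("Generating report " + reportId);          // visible to users
        try {
            // ... the actual work would go here ...
            LOG.fine("Report " + reportId + " written");     // developer detail
        } catch (RuntimeException e) {
            // Log the exception itself so the stack trace ends up in the log.
            LOG.log(Level.SEVERE, "Failed to generate report " + reportId, e);
            throw e;
        }
    }
}
```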
I work on a project that uses multiple open source Java libraries. When upgrades to those libraries come out, we tend to follow a conservative strategy:
if it ain't broke, don't fix it
if it doesn't have new features we want, ignore it
We follow this strategy because we usually don't have time to put in the new library and thoroughly test the overall application. (Like many software development teams we're always behind schedule on features we promised months ago.)
But, I sometimes wonder if this strategy is wise given that some performance improvements and a large number of bug fixes usually come with library upgrades. (i.e. "Who knows, maybe things will work better in a way we don't foresee...")
What criteria do you use when you make these types of decisions in your project?
Important: Avoid Technical Debt.
"If it ain't broke, don't upgrade" is a crazy policy that leads to software so broken that no one can fix it.
Rash, untested changes are a bad idea, but not as bad as accumulating technical debt because it appears cheaper in the short run.
Get a "nightly build" process going so you can continuously test all changes -- yours as well as the packages on which you depend.
Until you have a continuous integration process, you can do quarterly major releases that include infrastructure upgrades.
Avoid Technical Debt.
I've learned enough lessons to do the following:
Check the library's change list. What did they fix? Do I care? If there isn't a change list, then the library isn't used in my project.
What are people posting about on the library's forum? Is there a rash of posts starting shortly after release pointing out obvious problems?
In the same vein as number 2, don't upgrade immediately. EVERYONE has a bad release. I don't intend to be the first to get bitten by that little bug (anymore, that is). This doesn't mean wait 6 months either; within the first month of release you should know the downsides.
When I decide to go ahead with an upgrade: test, test, test. Here automated testing is extremely important (see the sketch below).
EDIT: I wanted to add one more item which is at least as important, and maybe more so than the others.
What breaking changes were introduced in this release? In other words, is the library going off in a different direction? If the library is deprecating or replacing functionality you will want to stay on top of that.
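As a rough illustration of the kind of automated testing meant in the "test, test, test" point (the formatter class is hypothetical; in a real project it would sit on top of the third-party library being upgraded): a plain JUnit test pins down the behavior you rely on, so re-running the suite against the new library version flags regressions immediately.

```java
// InvoiceFormatter.java -- hypothetical class; imagine it delegates to the
// third-party library whose upgrade is being evaluated.
import java.util.Locale;

public class InvoiceFormatter {
    public String format(double amount) {
        return String.format(Locale.US, "%.2f", amount);
    }
}

// InvoiceFormatterTest.java -- JUnit 4 regression test run by the build
// (e.g. via the Maven test phase) before and after the library upgrade.
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class InvoiceFormatterTest {

    @Test
    public void formatsAmountsWithTwoDecimals() {
        assertEquals("12.50", new InvoiceFormatter().format(12.5));
    }
}
```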
One approach is to bring the open source libraries that you use under your own source code control. Then periodically merge the upstream changes into your next release branch, or sooner if they are security fixes, and run your automated tests.
In other words, use the same criteria to decide whether to use upstream changes as you do for release cycles on code you write in house. Consider the open source developers to be part of your virtual development team. This is really the case anyway, it's just a matter of whether you choose to recognise it as part of your development practices.
While you don't want to upgrade just because there's a new version, there's another consideration, which is availability of the old version. I've run into that problem trying to build open source projects.
I usually assume that ignoring a new version of a library (because it doesn't have any interesting features or improvements) is a mistake, because one day you'll find out that this version is necessary for migrating to a later version that you do want to upgrade to.
So my advice is to review carefully what has changed in the new version, and consider whether the changes require a lot of testing or only a little.
If a lot of testing is required, it is best to upgrade to the newer library at the next release (major version) of your software (like when moving from v8.0 to v8.5). When this happens, I guess there are other major modifications as well, so a lot of testing is done.
I prefer not to let the versions lag too far behind on dependent libraries.
Up to a year is ok for most libraries unless security or performance issues are known.
Libraries with known security issues are a must for refreshing.
I periodically download the latest version of each library and run my app's unit tests using them.
If they pass, I use them in our development and integration environments for a while and push to QA when I'm satisfied they don't suck.
The above procedure assumes the API hasn't changed significantly. All bets are off if I need to refactor existing code just to use a newer library version (e.g. Axis 1.x vs. 2.x). Then I would need to get management involved to make the decision to allocate resources. Such a change would typically be deferred until a major revision of the legacy code is planned.
Some important questions:
How widely used is the library? (If it's widely used, bugs will be found and eliminated more quickly)
How actively developed is it?
Is the documentation very clear?
Have there been major changes, minor ones, or just internal changes?
Does the upgrade break backwards compatibility? (Will you have to change any of your code?)
Unless the upgrade looks bad according to the above criteria, it's better to go with it, and if you have any problems, revert to the old version.
A coworker of mine asked me to review some of my code, and he sent me a diff file. I'm not new to diffs or version control in general, but the diff file was very difficult to read because of the changes he made. Specifically, he used the "extract method" feature and reordered some methods. Conceptually, very easy to understand, but looking at the diff it was very hard to tell what he had done. It was much easier for me to check out the previous revision and use Eclipse's "compare" feature, but it was still quite clunky.
Is there any version control system that stores metadata related to refactoring? Of course, it would be IDE- and programming-language-specific, but we all use Eclipse and Java! Perhaps there might be some standard on which IDEs and version control implementations can play nicely?
Eclipse can export refactoring history (see 3.2 release notes as well). You could then view the refactoring changes via preview in Eclipse.
I don't know of compare tools that do a good job when the file has been rearranged. In general, this is a bad idea because of this type of problem. All too often people do it to simply meet their own style, which is a bad, bad reason to change code. It can effectively destroy the history, just like reformatting the entire file, and should never be done unless necessary (i.e. it is already a mess and unreadable).
The other problem is that working code will likely get broken because of someone's style preferences. If it ain't broken, don't fix it!
I asked a similar question a while ago and never did get a satisfactory answer. I'll be watching your question to see what people come up with.
For your particular situation, it might be best to review the latest version of the file, using the diff as a guide. That's what I have been doing in my situation too.
The Refactoring History feature is new to me, but I like the way it sounds. For a less tool-specific method, I like sending patch files. The person reviewing just applies the patch and reviews the results, and then they can revert to the version in version control when they're done.