A project I was working on has finished, so I've been moved on to new tasks at my employer. The previous work was very agile: small team, progress over procedure, and so on.
So anyway, on the new project I'm on, I find myself confused about how to deal with management. They have no real understanding of object-oriented programming, current technologies, or methodologies. They seem to fear change, and only just recently did we move to the latest JRE.
We do these code reviews, and I have to listen to "gray beards" saying how much better things were in Ada or how they used to do things in C. But when they actually try to review the code, they lack even the most basic understanding of OOP design and development. They focus mostly on the style of the code: spacing, method names, and so on.
One of the senior-level people says we should be writing our own logger instead of using log4j because of one negative review of log4j in an academic PDF written ages ago.
How do I deal with this? How can I explain to them that their design is faulty or that they are really behind the times, without coming across as a jerk? I've only been with this organization for about a year, so I don't know how much credibility I will have.
Regarding the code review, I would say make them happy. Name and space things the way they like. Focus your time on better design, of course, and enjoy the Ada reminiscing; it can still give you some background on where things are today and how they got there.
In other words, don't take that part too seriously. Worry about what is important to getting the job done. The job in this case is making those that matter feel you made a positive contribution to the project.
Regarding Log4j, I would just suggest a different framework: either the built-in JDK logging (they can't complain about that, it is a built-in API) or something like SLF4J, which lets you plug in whatever you want (including your own, I guess, which you can then throw away and replace with something real when they realize it was a mistake, and you only have to change the classpath).
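For what it's worth, here is a minimal sketch of what coding against the SLF4J facade looks like (the class and method names are placeholders I made up); the code depends only on slf4j-api, and the actual destination of the output is decided by whichever binding jar is on the classpath:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderService {
    // The code depends only on the SLF4J facade (slf4j-api).
    private static final Logger log = LoggerFactory.getLogger(OrderService.class);

    public void placeOrder(String id) {
        // Where this ends up (log4j, the JDK logger, logback, or a
        // home-grown implementation) is decided by the binding jar
        // on the classpath, not by this code.
        log.info("Placing order {}", id);
    }
}
```

Swapping the backend later really is just a classpath change; none of the calling code has to move.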
Now there will be times when it is important. In those cases, make it sound as much as possible like it was their idea. For example, on the logging, state that there are many logging frameworks out there that represent a lot of lines of code, and that you were wondering if there are ways to leverage that work for this project, and then let them "figure out" the solution.
There will be times when you have to push something as your idea; there will be no other way. In that case, stick to the evidence, marshal allies as much as possible by keeping your relationships with those who do have influence in good standing, and realize that with every battle you fight, you lose position, even if you win (perhaps especially if you win).
I'd recommend approaching your concerns as 'suggestions'. Make a suggestion and ask for their opinion on it, that way they feel as if they are still in control even though you've planted the seed and are directing the conversation.
Regardless of how long you have been with an organization, you are there and you are there for a reason (they hired you for your input). Find your voice and how to best approach your team members with suggestions and/or concerns. This is a crucial part of being a team member and will increase your value.
Get a good formatter and create your method names in a way they cannot complain about; then your discussion can move on to real issues.
Some people cannot get over these little details during reviews, so you need to make it a non-issue.
Your work has to gain credibility before they will listen to you. So yes, do as others have recommended, and make sure the unimportant formatting laws are adhered to. But also do such high quality work that they can't ignore or marginalize you. Try to guide them in ways that makes them think the ideas are coming from them.
I disagree with the recommendation of another logging framework besides log4j. Citing an old review, without any kind of personal experience, should not win the day here.
However, there might be a way to turn this to your advantage. If you agree and recommend the logging built into the JDK or Apache Commons logging, you'll find that both are quite similar and can actually use log4j as their underlying implementation.
If your adversary isn't paying much attention, you may win points for giving in and avoiding a bike shed argument and STILL get what you want.
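To illustrate (the class name here is made up): code written against Apache Commons Logging never names log4j directly, yet Commons Logging will typically discover log4j at runtime and delegate to it if it is on the classpath:

```java
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class ReportJob {
    // Commons Logging picks an underlying implementation at runtime;
    // with log4j on the classpath, these calls usually end up in log4j.
    private static final Log log = LogFactory.getLog(ReportJob.class);

    public void run() {
        log.info("Report job started");
    }
}
```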
Respect your elders I say! :)
Really though, just remember that a lot of these gray beards were probably programming while you were in diapers. That doesn't make them experts in the latest technologies, but it should at least earn your respect. And sometimes if you can find a way to look past all the hemming and hawing and "back in my day" stuff, you can pick up some pearls of wisdom from those old dogs!
Now from the programming perspective, looks like Yishai has it right. It should be pretty easy to conform to the coding styles they want, and once you've made them happy you can run with the code the way you want.
And if you have to present a counter opinion, back it up. If you want to use something like log4j, talk about SPECIFIC projects in your past where you've used it and it worked fine, and offer to help anyone get past any problems they have with it, etc. etc.
Remember, while you look at the old gray beards as not knowing how to do some cool new programming, they probably see you as a young whippersnapper with a lot of crazy ideas out to change the world. An ounce of patience will get you a pound of respect.
I am an old gray beard, but I abandoned COBOL 35 years ago, code in .NET C#, have kept up with the young whippersnappers, and try to mentor them too. That said, I see a lot of managers and programmers who are still in the dark ages, like VB6, and cannot accept web farms or web services. Some of these gray beards and young whippersnappers cannot normalize a database table to 3NF, let alone code n-tier or WCF, or have a clue. Worse yet, some of the managers are 30 years behind, relying on VB6 at best and a flat file using Access 97.
First of all, I love Python, and I currently use it for most stuff. However, as a PhD student, I mostly implement prototypes for testing and evaluating ideas. This also means that I'm usually the only one coding, and that -- while I certainly try to write halfway efficient code -- performance is not a primary issue. And for quick prototyping, Python is, for me, just neat.
Now I'm considering making some of my stuff more "serious", i.e., bringing it into a production environment, making it more maintainable, and maybe more efficient. So I wonder if it's worthwhile to rewrite my code in, say, Java (with which I'm also reasonably familiar). I know that Python is not slow, but things like Java's static typing seem to make it less prone to errors on a larger scale, particularly when different people work on the same project.
It's only worth it if it solves a real problem. Note that the problem could be:
I want to learn something better
I need it to go faster to reduce power requirements in my colo.
I need to hire more people and the talent pool for [insert language here] is too small.
Insert innumerable real problems here.
Python and Java are both suitable for production. Write it in whatever makes it easiest to solve the problems you and/or your team are facing, and if you want to preempt some problems, make sure you've done your homework. Plenty of projects have died because they chose C/C++, believing performance was going to be a major factor, without thinking about the extra effort involved in using those languages well.
You mentioned maintainability. You're likely to require more code to rewrite it in Java, and there's a direct correlation between bugs and lines of code. It's up for debate which one is easier to maintain; I'm sure both camps believe theirs is.
Of the two, which one do you enjoy coding in the most?
The crucial question is this one: "Java's static typing seems to make it less prone to errors on a larger scale". The crucial word here is "seems." Sure, Java will help you catch this one particular type of error. But how important is that, and what do you have to pay for it? The overhead imposed by Java's type system means that you have to write more lines of code, which means reduced productivity. I've used both, and I have no doubt that I'm more productive in Python. I have found that type-related bugs in Python are generally easy to find and fix. Keep in mind that in a professional environment you're not going to ship code without testing it pretty carefully. The bottom line for a programming environment is productivity - usable functionality per unit of effort, not the number of bugs you found and fixed during development.
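To make the trade-off concrete, this is the kind of error Java's type system catches at compile time (the names are invented for illustration); the equivalent slip in Python surfaces only when that line actually runs, typically during a test:

```java
public class TypeCheckDemo {
    static int totalPrice(int unitPriceCents, int quantity) {
        return unitPriceCents * quantity;
    }

    public static void main(String[] args) {
        System.out.println(totalPrice(250, 4));   // fine: prints 1000

        // The compiler rejects the next call outright with
        // "incompatible types: String cannot be converted to int":
        // int bad = totalPrice("250", 4);
    }
}
```

Whether that compile-time check is worth the extra ceremony is exactly the productivity question being raised here.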
My advice: if you have a working project written in Python, don't rewrite it unless you're certain there's a benefit.
Java is inherently object-oriented. Python, by contrast, leans procedural.
As far as the ability of the language to handle large projects goes, you can make do with either.
As far as producing more usable products goes, I would recommend JavaScript as opposed to Java because of its viability in the browser. By embedding your JS in a publicly hosted website, you allow people with no coding knowledge to run your project seamlessly in the browser.
Furthermore, all the GUI design features of HTML are at your disposal.
That said, any language has its ups and downs, and anything I've said here is simply my perception.
I am new to this. I have been learning Java for a few months, and I am working on my first project by myself.
What confuses me is that I write a few lines and run them; sometimes it works, and a lot of the time it doesn't. Then I have to Google it, find a solution, and incorporate it. Then I try to write more code; sometimes it works, and often I have to repeat the whole process, or ask here on Stack Overflow.
Now, probably because I am new, with practice I won't need to look things up as often. HOWEVER, as my skills get better, the projects I take on will get harder as well, so I am guessing that I will STILL have to keep looking things up. For example, I learnt how to send data over TCP/IP, then I had to look up how to encrypt it, then how to encrypt with strong encryption, then how to store that data in a Derby database; each time I have to continually look things up.
So the question:
Part 1: Does this process ever end? Surely it must; otherwise, how would software projects get finished? Programmers can't be looking things up all the time. Do I just need loads and loads of coding hours?
Part 2: It always seems to take longer than I anticipate. If I am using the GUI editor to design a form and think I can do it in two days (as an example), something ALWAYS goes wrong and it takes 2-3 times as long. If I am this lousy, I won't even be able to hold a job; I'd be fired in 3 days :(
Any help from experienced people is greatly appreciated. Thanks.
Le Prince De Dhump
As you get more experienced, you will have to look things up less and less. However, I can tell you that you will never stop looking things up altogether. I've been using Java for about 8 years now and still use Google to look up code snippets and answers to questions all the time. There's no shame in it, it's just part of being a programmer.
Part 1: I have been writing programs for about 25 years and still use Google or search in books to find solutions (it's often faster than coming up with something new of your own). More practice and better knowledge of the libraries, data structures, and algorithms reduce the amount of searching on the web.
Part 2: It is a common problem in software development to underestimate the time needed for implementation. You need a lot of practice, and you should review why things took longer than expected. It seems that software developers tend to blank out all the little problems that eat up so much time during implementation.
My advice: split your problem into little pieces and estimate the time needed for each separately. If possible, compare the problems to ones you have solved in the past and remember how much time they took. And at the end of the project, review which estimations were totally wrong and try to figure out the reason. That helps with future estimations.
There are many books out there that try to help you solve the time estimation problem. Reading some of them may also give you clues about what the pitfalls are.
I think this is a more general issue regarding the concept of skill acquisition (so perhaps more suitable for the "Workplace" site).
There are two good books on this subject, "Mastery" by Robert Greene and "Outliers" by Malcolm Gladwell, which discuss the 10,000 hours needed to master a discipline.
A short answer: this is normal. The things you are doing have not yet been "wired" into your brain, and that takes time and attempts. After some time doing those tasks, you will be able to "remember" faster, or do them subconsciously. Additionally, you will be able to build associations, so that your mind will recognise more patterns (or differences), etc. Each thing that gets "in" your head will be accomplished with less effort, and therefore you will be able to think deeper and achieve more in the same amount of time.
In many languages, for me specifically Java and C++, there is a massive standard library. Many classic problems in computer science (searching, sorting, hashing, etc.) are implemented in this library. My question is: are there any benefits to, say, implementing one's own algorithm versus simply using the library's version? Are there any particular instances where this would be true?
I only ask because in school a great deal of time is spent on, say, sorting; however, in my actual code I have found no reason to use this knowledge when people have already implemented and optimized a sorting algorithm in both Java and C++.
EDIT: I discussed this at length with a professor I know and posted his response below; can anyone think of more to add to it?
Most of the time, the stock library functions will be more performant than anything you'll custom code.
If you have a highly specific (as opposed to a generic) problem, you may find a performance gain by coding a specialized function, but as a developer you should make a conscious effort to not "reinvent the wheel."
Sorting is a good example to consider. If you know nothing whatsoever about the data to be sorted, except how to compare elements, then the standard sort algorithms fare well. In this situation, in C++, the STL sort will do fine.
But sometimes you know more about your data. For example, if your data consists of uniformly distributed numbers, a radix sort can be much faster. But radix sort is 'invasive' in the sense that it needs to know more about your data than simply whether one number is bigger than another. That makes it harder to write a generic interface that can be shared by everyone. So STL lacks radix sort and for this case you can do better by writing your own code.
In general, standard libraries contain very fast code for very general problems. If you have a specific problem, you can in many cases do better than the library. Of course, you may eventually come across a complex problem which is not solved by a library, in which case the knowledge you have gained from studying solutions to solved problems could prove invaluable.
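As a rough illustration of the radix-sort point, here is a minimal Java sketch (assuming non-negative 32-bit ints; a generic comparison sort such as Arrays.sort is the library alternative). It exploits the fixed width of the keys, which a generic interface cannot assume:

```java
import java.util.Arrays;

public class RadixSortSketch {
    // LSD radix sort for non-negative ints, 8 bits per pass.
    static void radixSort(int[] a) {
        int[] buf = new int[a.length];
        for (int shift = 0; shift < 32; shift += 8) {
            int[] count = new int[257];
            for (int x : a) count[((x >>> shift) & 0xFF) + 1]++;     // histogram of this byte
            for (int i = 0; i < 256; i++) count[i + 1] += count[i];  // prefix sums -> bucket starts
            for (int x : a) buf[count[(x >>> shift) & 0xFF]++] = x;  // stable scatter
            System.arraycopy(buf, 0, a, 0, a.length);
        }
    }

    public static void main(String[] args) {
        int[] data = {170, 45, 75, 90, 802, 24, 2, 66};
        radixSort(data);
        System.out.println(Arrays.toString(data)); // [2, 24, 45, 66, 75, 90, 170, 802]
    }
}
```

Four linear passes instead of O(n log n) comparisons is exactly the kind of win you only get by knowing more about your data than whether one element is bigger than another.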
In college, or school, or if learning as a recreational programmer, you will be (or in my strident opinion, you should be) encouraged to implement a subset of these things yourself. Why? To learn. Tackling the implementation of an important already invented wheel (the B-Tree) for me was one of the most formative experiences of my time in college.
Sure I would agree that as a developer you should make an effort not to reinvent the wheel, but when learning through formative experiences, different rules apply. I read somewhere else on this forum that to use something at abstraction level N, it is a very good idea to have a working knowledge of abstraction level N-1, and be familiar with level N-2. I would agree. In addition to being formative, it prepares you for the day when you do encounter a problem when the stock libraries are not a good fit. Believe me this can happen in your 50 year career. If you are learning fundamentals such as data structures, where the end goal is not the completeness of your finished product but, instead, self improvement, it is time well spent to "re-invent the wheel".
Is pre-algebra/algebra/trigonometry/calculus worth learning?
I can't tell if this is an "am I wasting my time/money in school?" question or a sincere question about whether your own version is going to be better.
As for wasting your time/money in school: If all you want to do is take pot shots at developing a useful application, then you're absolutely wasting your time by learning about these already-implemented algorithms -- you just need to kludge something together that works good 'nuff.
On the other hand if you're trying to make something that really matters, needs to be fast, and needs to be the right tool for the right job -- well, then it often doesn't exist already and you'll be back at some site like Stack Overflow asking first or second year computer science questions because you're not familiar enough with existing techniques to roll your own variations.
Depending on my job, I've been on both sides. Do I need to develop it fast, or does it have to work well? For fast application programming, it's stock functions galore unless there's a performance or functionality hindrance I absolutely must resolve. For professional game programming it has to run blazing fast. That's when the real knowledge kicks in: memory management, I/O access optimization, computational geometry, low-level and algorithmic optimization, and all sorts of clever fun. And it's rarely ever a stock implementation that gets the job done.
And did I learn most of that in school? No, because I already knew most of it, but the degrees helped without a doubt. On the other hand, you don't know most of it (otherwise you wouldn't be asking), so yes, in short: it is worthwhile.
Some specific examples:
If you ever want to make truly amazing games, live and breath algorithms so you can code what other people can't. If you want to make fun games that aren't particularly amazing, use stock code and focus on design. It's limiting, but it's faster development.
If you want to program embedded devices (a rather large market), often stock code just won't do. Often there's a code or data memory constraint that the library implementations won't satisfy.
If you need serious server performance from modest hardware, stock code won't do. (See this Slashdot entry.)
If you ever want to do any interesting phone development the resource crunch requires you to get clever, even often for "boring" applications. (User experience is everything, and the stock sort function on a large section of data is often just too slow.)
Often the libraries you're restricted to using don't do what you need. (For example, C# doesn't have a "stable" sort method. I run into this annoyance all the time and have since written my own solution.)
If you're dealing with large amounts of data (most businesses have it these days) you'll end up running into situations where an interface is too slow and needs some clever workarounds, often involving good use of custom data structures.
Those libraries offer you tested implementations that work well, so the rule of thumb is to use those implementations. If you have a very particular/complex problem where you can use some domain knowledge, you have a case where you may need to implement your own version of an algorithm.
I remember an example Bill Pugh gave in his programming languages class where they analyzed the performance of a complex application and realized that a faulty custom implementation of a sorting algorithm by a programmer (code that was used many times in the real runs of the application) was responsible for a 90% performance decrease!
After discussing this at length with a professor of computer science, here are his opinions:
Reasons to Use Libraries
1. You are writing code with a deadline.
There is no sense in hampering your ability to complete a project in a quick and timely manner. That's why libraries are written after all, to save time and avoid "reinventing the wheel"
2. If you want to optimize your code fully.
Chances are the team of incredibly talented people who wrote the algorithm in Java or C++'s or whoever's library did a far better job at optimizing their algorithm for that language in however long it took them than you can possibly do in an hour or two. Or four.
3. You've already solved this problem before.
If you have already solved this problem and have a complete understanding of how the solution is designed, you don't need to labor over a complex solution, as you don't stand to gain much benefit.
That being said, there are still many reasons to make your own solution.
Reasons to Do It Yourself
1. A fundamental understanding of problem-solving techniques and algorithms is completely necessary once you reach a problem that is better optimized by a non-library solution.
If you have a highly specialized problem (such things often come up when working with networking or gaming), it becomes invaluable to be able to spot situations in which a specific algorithm will outperform the library's version.
2. Having a very good understanding of algorithms and their design and use makes you much more valuable in the work place.
Any halfway decent programmer can write a function to compare two objects and then toss them into a library function; however, the one who is able to spot such a situation and ultimately improve the program's functionality and speed is going to be looked upon well by management.
3. Having the concept of how to do something is often just as, if not more so, valuable than being able to do it.
With an outstanding knowledge of Java's libraries and how to use them, chances are you can field any problem in Java with reasonable success. However, when you get hired to work in Erlang, you're going to have some rough times ahead. Whereas if you had known how, and not merely what, Java's libraries did, you could move those ideas to any language.
4. We as programmers are never truly satisfied with merely having something "work".
Chances are that you have an itch to understand why things work. It was this curiosity that probably drove you to this area of study. Don't deny this curiosity! Encourage it and learn to your heart's content.
5. Finally, there is a huge feeling of success and accomplishment that comes with creating your own personal way of sorting or hashing etc.
Just imagine how cool your friends will think you are when you proclaim that you can find the shortest path between two vertices in n log(n) time! On a serious note, it is very rewarding to know that you are completely capable of understanding and choosing an optimal solution based on knowledge, not just what some library gives you.
I'm working for small company, which operates in the automation industry.
The boss hired me because he wants to sell/give some desktop applications to his current customers.
He requires me to use the NetBeans Platform (a generic desktop application framework).
A software engineer friend of his advised him to choose this framework.
So far I have created 3 desktop applications with the NetBeans Platform.
I like the NetBeans Platform. I really take advantage of its modularity, Window System, and Lookup.
Unfortunately, I'm frustrated to know that I can do the same work with Python and PyQt in a fraction of the time.
I've already explained to my boss the main advantages of Python, but he doesn't like the idea of using a language he has never heard of.
I'm the only programmer who codes desktop applications, and except for the framework requirement, I'm free to use whatever I want.
I'm looking for good arguments to convince him to leave the NetBeans Platform for Python/PyQt.
P.S.: My English is bad, sorry.
If your selling skills are not working in discussion format, I highly suggest that you document your case. Some managers/bosses respond really well to this.
Make a matrix of all the metrics that you yourself use to grade the two frameworks (I leave the level of objectivity to you: for example, if you are objective, it should analyze the cost of transition and the loss of institutional experience, though that might not be high).
Finally, send it by e-mail, and voilà, you have:
made a report/analysis of the situation providing options for improvement
shown that you are thinking toward the future and that you have initiative
EDIT:
You can also ask your boss to show your analysis to his friend if he trusts his friend that much, but ask for a written counter-analysis so that you can address the critique.
It is a good thing to do it openly and document the decision process well, since ultimately, if your suggestion is accepted, you will share responsibility for the decision.
The problem is that development time is usually nothing compared to maintenance. Who cares if it takes two days instead of four if the app has a 1-5 year lifetime?
You'll have to convince him that if you get hit by a truck or leave the company (perhaps to work for somebody who uses Python exclusively) that he won't be left in the lurch with a bunch of applications that nobody else knows and can't maintain or upgrade.
The basic problem here is that your non-technical boss is getting conflicting advice from you and from the friend who advised him in the first place. If you want him to take your advice seriously you need to prove that your advice is likely to be trustworthy. And that will only come with taking the lead and being successful with significant projects in the company. Right now, you haven't earned his confidence.
The other thing to consider is how your preferences mesh with the company's objectives. For instance, you want to be able to write code fast. But the boss / the company needs code that is going to be reliable and maintainable ... if you decide to take another position. He doesn't want to be left in the awkward situation where the company is contractually committed to deliver code that doesn't really work properly, and the only person who understands it has left.
First, results speak for themselves: if you can piece together another version of one of your applications in PyQt and tell him how long it took, it might be motivation enough.
Or, next time you're starting a project, you could prototype four or five different versions of the interface in PyQt in the morning, ask his feedback after lunch, and then say, "if I keep going on these, it'll be done in two days; if I redo this in NetBeans, it'll be done in four."
And as for the "never heard of it", feel free to point out that Google uses Python extensively, and has even hired many of the Python developers.
Some people will tell you to try to convince your boss verbally. Others will tell you to document the time savings you think you can make. My opinion is that you should just go ahead and do it. Do it in your own time if you strongly believe it's in your best interests.
I've yet to meet a software manager who turned down a working piece of software when it comes in on time and under budget. This is by far the best method of persuasion I've used in my career. It's also a great way to show you have initiative. Just be prepared to work for free if it doesn't work out.
Have you emphasised the point of lower development time? Any person who doesn't want a shorter turnaround time is an idiot. This is the main issue I can think of for the change. Or, what you could do is develop it on the side, and when you have errors, say "this is what I have been doing in my spare time" (and have a working copy written in Python).
Perhaps showing him:
a) time spent developing in Python and in Java
b) lines of code in Python and in Java
With these two metrics, maybe you can make your case stronger.
I would guess a lot, in terms of risk management, would depend on the separation/isolation of the various applications you develop, and their life cycles.
If you don't need to build on a central set of libraries, or you have (or can develop) Python bindings for them, and the projects are relatively self-contained, say with a turnaround of two to six months, you could give him a reasonable quote for a project in Java that he's familiar with (to make sure it doesn't appear artificially inflated). Then give a much reduced quote for the same in Python+PyQt, and see if you can get him to invest on your advice.
Without tangible evidence from inside that a change of direction will bring benefits, the more management- and economics-savvy people who are technically ignorant will not buy into a new platform when the old one never prevented them from building and selling.
Without a decent assessment of why he doesn't want to switch platforms and what he considers the risks to be, it's kind of hard to give more pertinent advice.
Just use Netbeans as an IDE and he'll never notice :P
Speaking more seriously: a side-by-side comparison of the strong and weak points of each technology will certainly be more convincing. Just don't cheat too much in favor of Python ;)
This question could bring a lot of opinions to the table, but what I would like to get is a set of measures that will help me and my company determine the end of life of a product that we sell.
We sell a CMS system, and with this system we create a few sub-products:
Websites
Proposal Creator
Marketing Campaign Tracker
We are ready to start our roadmap planning (for 2010 and 2011), and we are trying to figure out when the end of life of our application will be. Some of you might think that a very well-architected application (I don't think our application is well architected) does not need to have an end of life, but this app goes back at least 6-7 years and has almost no documentation (real life). At this moment only ONE person knows how to change the core functionality (scary).
Please advise,
Geo
Thanks to All! I really appreciate your comments, opinions and thoughts on this topic.
I will address a few of the follow-up questions in the list below:
There is one developer who is able to maintain the core functionality of our product (one and only one).
There are two developers who are able to extend functionality to a certain point. Both developers are constrained by the limitations of the core product, and they both have to work within those limits.
A very important note: the product that we are considering putting to end-of-life was, for the most part, built by a contractor. The contractor is the only developer able to maintain the core functionality; we only develop on top of the contractor's framework.
I will keep adding answers as I read all your responses.
Since the application is very well architected, you may not want to retire it and lose all the investment you have made to date.
Here are my suggestions:
Have a junior developer join the current developer.
Dump most future updates on the junior developer (with assistance from the senior developer).
Ask the junior developer to document his work.
Ask the senior developer to review the documentation.
Over a period of time, you will have another person who can support this application, and it will be documented as well. Then you won't need to kill your very well-architected application with your own hands.
Extending this solution with Jefferey's suggestion below ("Sometimes rewriting is a good investment."):
If you still want to drop the current application and rewrite it, you still need to document the existing system and create requirements for the new system based on it.
Using the documentation of the current and proposed systems, you may want to see if you can incrementally upgrade (rewrite) components module by module. This is possible if the application is very well architected.
As per your (Geo) comments
Geo's organization has a custom third-party CMS application (with one and only one contract developer) that implements the business requirements below, and it is paying a licensing fee for support and use of his code.
Business requirements for the CMS:
Websites
Proposal Creator
Marketing Campaign Tracker
Here are my suggestions:
Create a detailed, module-by-module use case document for this project. Your developer can do this, or it would be ideal to have a separate business analyst do it.
Hire a senior developer to evaluate whether an open source CMS can handle all or most of your requirements (e.g. Joomla, Drupal, etc.).
The most important thing here would be the ability to migrate your existing data to the new system. You may need help from your existing contract developer to do this.
You may have to update business processes or workflows to use the new system.
Modules that cannot be implemented using the open source CMS may need to be implemented as a custom website.
Much of this also depends on your business relationship with the existing contract developer and the license agreement. What you are facing is a vendor lock-in scenario. You may want to research further into solutions to eliminate this vendor lock-in situation.
This is just my opinion, but if this is a product that you are selling, then it all boils down to business prospects. If the product doesn't sell, then drop it. If the product has a future, then invest in it, and make it the best software you can by refactoring, rewriting, or whatever you have to do. If you have loyal customers or a strong brand, then that's worth protecting.
Sometimes rewriting the whole thing in another technology is a good investment, if the current software has a successful design that can be copied, has a strong brand, and if it can be done right.
The application reached end of life the moment it shipped without any sort of documentation. Begin development now, and you might want to consider replacing the person who knows the original system. If they've gone 6/7 years without creating any sort of documentation whatsoever, they're not someone you'll want in your company.
The only kinds of documentation which will extend your system's life are things which stay consistent as the system changes over its lifetime: test suites, self-diagnostic tools, code comments, declarative contracts like interfaces, and automatically generated documentation.
Other manually managed documentation artifacts, like manuals, developer guides, architecture documents, and data formats, tend to become out of date in proportion to the amount of documentation. I would not count these as factors which increase your application's life expectancy unless you have already factored in the cost of maintaining them.
If you can't "afford" developer redundancy to maintain the application reliably, there's no way you can afford to keep the documentation up to date. Lack of documentation is really a technical debt you've decided, perhaps unconsciously, to take on. If a longer lifecycle is a requirement, then the cost of that has to factor into meeting that requirement.
To make a long story short: I am in a comparable situation.
As long as this application is something like a cash cow, but the company can't afford (or doesn't intend) to develop a new application, it will not die before customers decide to buy a fresher system.
Rewriting without (documented) requirements is almost impossible.
At least the experience of the specialized departments should be documented in a way that is useful for further development.
If you have to maintain this application, you should introduce interfaces between modules to reduce overall complexity. Then the old modules, no matter how messy they are, don't care if you have to plug in new functionality.
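A minimal sketch of what such a seam could look like in Java (the names are invented, loosely borrowed from the sub-products in the question): callers depend only on the interface, the messy legacy code hides behind it, and a rewritten module can be swapped in later without touching the callers.

```java
// A narrow seam between the legacy module and the rest of the system.
public interface CampaignTracker {
    void recordClick(String campaignId);
    int clickCount(String campaignId);
}

// The old, messy code is wrapped behind the interface.
class LegacyCampaignTracker implements CampaignTracker {
    public void recordClick(String campaignId) { /* delegate to the legacy code */ }
    public int clickCount(String campaignId) { return 0; /* likewise */ }
}

// Callers only ever see the interface, so a rewritten implementation
// can replace LegacyCampaignTracker later without changing this class.
class ReportGenerator {
    private final CampaignTracker tracker;
    ReportGenerator(CampaignTracker tracker) { this.tracker = tracker; }
    String summary(String campaignId) {
        return campaignId + ": " + tracker.clickCount(campaignId) + " clicks";
    }
}
```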
Even if it is very well designed and functioning, the fact that it has no documentation and depends on one person for its life means the product has very much entered an unmaintainable state. This is not a good sign. I would agree that the product is long past "end of life".
These are the kind of things I might consider when deciding if a system might be going "end-of-life":
Is the functionality that this system provides available to end-users in a cheaper, more reliable or easier to use form? If not now, then when is this likely? Is this product therefore viable in the longer term?
Is this written in a technology that customers would steer away from as it would be awkward to interface into their products, or require them to run "obsolete" platforms? Would it give a potential customer the impression that your company is significantly behind the times e.g. VB6 is probably just still OK, even in 2010, but requiring Win16 compatibility probably isn't.
Can you hire good people that know the underlying technical platform at a reasonable cost? On older tech, it might be that there are lots of people that know the platform but see it as a dead-end and will ask for a premium salary if their career is going to languish in the doldrums whilst they work on it.
If it matters to you, is the development platform still supported by the vendor? Are you going to be constrained on what hardware or OS you can run it on if the vendor is no longer updating it? Likewise, are there security holes in the platform that may need to be updated? Even niche Open Source products can suffer from this. Once a product goes out of favour and core developers move onto fresh projects, it can be difficult to get fixes done by the community.
If it is supported, are the vendors charging you a premium for supporting an older platform? If not now, then how long before they do?
How difficult is it to integrate new technologies into it to take advantage of current trends that offer enriched functionality to end-users to keep you competitive? Do you care about this? It may not matter if you're essentially running a closed system.
How difficult is it to release functional changes to it without extensive "shotgun" changes that ripple across the whole system? This comes back to how modular the system was designed to be in the first place. If you're no worse than your competitors in getting features to market, then you're probably OK.
What would be the cost of rewriting the system? How does this compare to how much cheaper it would be to maintain, or to any increase in sales revenue? It might not be economically feasible to do a whole rewrite. If the development platform is sound, then you could just try more of a refactoring approach. This is where having a good test suite and documentation helps.
I went through a similar process. We had a web app that had run for almost 8 years. In that time a lot of maintenance was done, extending it in ways we hadn't envisioned. However, the core was good and it was still able to be stretched.
What pushed us over was the maintenance cost. Finding people with the right skill set was easy 8 years ago. Today, no one wants to work in those environments; not even us :)
After analysis, we knew we could replace it within 12 months with identical functionality AND that this time spent would pay off quickly.
So, we used screen shots as our functional requirements, revamped the look and feel, and were even able to deliver increased functionality. We also looked at usage data to identify parts that were either rarely or never used and trimmed those, and focused more attention on the parts that were used.
Ultimately, we were successful. In part because everyone on the team was well versed in the new technology so there was little need for learning. Other contributing factors included a well thought out design. I think we spent 3 months just in design before writing anything.
The final factor was that our app is modular. So we were able to chunk it out in sizes small enough to have a combination of short delivery schedules with a downtime / analysis period between each deliverable. This ensured we were on the right path at every stage.