After reading "Java Concurrency in Practice" and "OSGi in Practice" I found a specific subject very interesting: safe publication. The following is from JCIP:
To publish an object safely, both the reference to the object and the object's state must be made visible to other threads at the same time. A properly constructed object can be safely published by:
Initializing an object reference from a static initializer.
Storing a reference to it into a volatile field.
Storing a reference to it into a final field.
Storing a reference to it into a field that is properly guarded by a (synchronized) lock.
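To make that concrete, here is a minimal sketch of my own (not code from the book) showing each of the four idioms in one class; Holder stands in for any properly constructed object:

public class SafePublication {

    static class Holder {
        private final int value;
        Holder(int value) { this.value = value; }
        int value() { return value; }
    }

    // 1. Initializing an object reference from a static initializer.
    private static final Holder FROM_STATIC_INIT = new Holder(1);

    // 2. Storing a reference into a volatile field.
    private volatile Holder volatileHolder;

    // 3. Storing a reference into a final field, set in the constructor.
    private final Holder finalHolder;

    // 4. Storing a reference into a field guarded by a lock.
    private Holder guardedHolder;

    public SafePublication() {
        finalHolder = new Holder(3);
    }

    public void publishVolatile() {
        volatileHolder = new Holder(2); // safe: volatile write
    }

    public synchronized void publishGuarded() {
        guardedHolder = new Holder(4);  // safe: the intrinsic lock guards the field
    }

    public synchronized Holder readGuarded() {
        return guardedHolder;           // readers must use the same lock
    }
}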
My first question: how many java developers are aware of this (problem)?
How many real-world Java applications really follow this, AND is this really a real problem? I have a feeling that 99% of the JVM implementations out there are not that "evil", i.e. a thread is not guaranteed (in fact it's practically (almost) impossible) to see stale data just because the reference does not follow the "safe publication idiom" above.
Proportionally, it's probably fair to say that very few programmers sufficiently understand synchronization and concurrency. Who knows how many server applications there are out there right now managing financial transactions, medical records, police records, telephony, etc., that are full of synchronization bugs and essentially work by accident, or very, very occasionally fail (never heard of anybody getting a phantom phone call added to their telephone bill?) for reasons that are never really looked into or gotten to the bottom of.
Object publication is a particular problem because it's often overlooked, and it's a place where it's quite reasonable for compilers to make optimisations that could result in unexpected behaviour if you don't know about it: in the JIT-compiled code, storing a pointer, then incrementing it and storing the data is a very reasonable thing to do. You might think it's "evil", but at a low level, it's really how you'd expect the JVM spec to be. (Incidentally, I've heard of real-life programs running in JRockit suffering from this problem -- it's not purely theoretical.)
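To illustrate, here is a hedged sketch of the kind of unsafe publication being described; the Java Memory Model permits the bad outcome, even though you may never observe it on your current JVM and hardware:

public class UnsafePublication {

    static class Holder {
        int value;
        Holder(int value) { this.value = value; }
    }

    static Holder holder; // plain field: no safe-publication guarantee

    static void publisher() {
        holder = new Holder(42); // the reference store may become visible
                                 // before the constructor's write to value
    }

    static void reader() {
        Holder h = holder;
        if (h != null && h.value != 42) {
            // Permitted by the JMM on some JVM/hardware combinations.
            System.out.println("Saw a partially constructed object!");
        }
    }
}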
If you know that your application has synchronization bugs but isn't misbehaving in your current JVM on your current hardware, then (a) congratulations; and (b), now is the time to start "walking calmly towards the fire exit", fixing your code and educating your programmers before you need to upgrade too many components.
"is this really a real problem?"
Yes absolutely. Even the most trivial web application has to confront issues surrounding concurrency. Servlets are accessed by multiple threads, for example.
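As a sketch of what that means in practice (CounterServlet is a made-up example, assuming the standard javax.servlet API), one servlet instance serves many requests concurrently, so this unsynchronized counter is a data race:

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CounterServlet extends HttpServlet {

    private int hits; // shared mutable state, no synchronization

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        hits++; // read-modify-write race: concurrent requests can lose updates
        resp.getWriter().println("Hits: " + hits);
    }
}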
The other issue is that threading and concurrency are very hard to handle correctly. It is almost too hard. That is why we are seeing trends emerge like transactional memory, and languages like Clojure that hopefully make concurrency easier to deal with. But we have a ways to go before these become mainstream. Thus we have to do the best with what we have. Reading JCiP is a very good start.
Firstly "safe publication" is not really an idiom (IMO). It comes straight from the language.
There have been cases of problems with unsafe publication, with use of NIO for instance.
Most Java code is very badly written. Threaded code is obviously more difficult than average line-of-business code.
It's not a matter of being "evil". It is a real problem, and will become much more apparent with the rise of multicore architectures in the coming years. I have seen very real production bugs due to improper synchronization. And to answer your other question, I would say that very few programmers are aware of the issue, even among otherwise "good" developers.
I would say very few programmers are aware of this issue. When was the last code example you saw that used the volatile keyword? However, most of the other conditions mentioned I just took for granted as best practices.
If a developer completely neglects those conditions, they will quickly encounter multi-threading errors.
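For reference, the classic place volatile shows up is a stop flag shared between threads; a minimal sketch:

public class StoppableWorker implements Runnable {

    private volatile boolean stopRequested;

    public void requestStop() {
        stopRequested = true; // the write is guaranteed visible to the worker
    }

    @Override
    public void run() {
        while (!stopRequested) {
            // ... do work ...
        }
        // Without volatile, the worker might loop forever on some JVMs.
    }
}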
My experience (short-term contracting and consulting in lots of different kinds of environments, across most applications I've seen) agrees with this intuition - I've never seen an entire system clearly architected to manage this problem carefully (well, I've also almost never seen an entire system clearly architected). I've worked with very, very few developers with a good knowledge of threading issues.
Especially with web apps, you can often get away with this, or at least seem to get away with it. If you have Spring-based instantiation managing your object creation and stateless servlets, you can often pretend that there's no such thing as synchronization, and this is sort of where lots of applications end up. Eventually someone starts putting some shared state where it doesn't belong, and 3 months later someone notices some weird intermittent errors. This is often "good enough" for many people (as long as you're not writing banking transactions).
How many Java developers are aware of this problem? Hard to say, as it depends heavily on where you work.
For the past few weeks now I've been studying concurrency (multithreading) in Java. I find it difficult and rather different from anything I've encountered in the Java language so far (or in programming in general). Often I have to reread and reread, over and over again, until I start to understand a small concept fully.
It's frustrating and I've wondered why this part of the Java programming language has given me so much trouble.
Usually when I look at the code of a single-threaded program I look at the main method and go step by step in my mind through the whole execution (like a debugger). Throughout this process I try to keep in mind EVERYTHING, like variables and their states (values), at every point in the execution. Oftentimes when doing that I even stop at certain points and think how the program execution would alter in different scenarios. If I can go through a program from start to finish like that, I feel like I've fully understood the code and the material.
The problem that I have, I suppose, is that when I try to apply this method to a concurrent application, there are so many things happening at once (sleep(), synchronized methods, acquiring intrinsic locks, guarded blocks using wait(), etc.) and there's so much uncertainty about when something will execute that it becomes nearly impossible for me to keep up with everything. That's what frustrates me, because I want to have a feeling of "I have control over what's happening", but with concurrency that's impossible.
Any help would be appreciated!!!
Concurrency is a simple concept, really - you have several separate paths of execution, which can interact with each other. The stuff you mentioned, like syncing, blocks, waits and so on are technical details, tools.
I would suggest trying to do some coding :-) Come up with a multi-threaded program idea and code it. At some point you will need to use one of the tools you listed, and it will all begin to fall into place. This is NOT a concept you should understand only in theory ^^
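For instance, a small starter program along those lines (my own sketch, not a prescribed exercise) makes the synchronized tool fall into place immediately:

public class CounterRace {

    private static int unsafeCount = 0;
    private static int safeCount = 0;
    private static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                unsafeCount++;           // data race: updates can be lost
                synchronized (lock) {
                    safeCount++;         // properly guarded
                }
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("unsafe: " + unsafeCount); // often less than 200000
        System.out.println("safe:   " + safeCount);   // always 200000
    }
}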
More than a science, concurrent programming is an art.
Before going into Java concurrency, PLEASE DO go through the conceptual things first, i.e. what are the major problems in concurrency? What is a lock? What is a semaphore? What is a barrier? Why do we use them? How can we use them for different purposes like variable protection, synchronization, etc.? And there are more like these.
Then you would probably get some very important knowledge before getting into language specific usage.
For a person who has followed sequential programming all the time and is looking at concurrency for the first time, it would definitely be harder to understand those things at once. But I am sure that, after some time, you can reach the same level in concurrent programming that you are at in sequential programming. :))
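As a concrete taste of two of the primitives mentioned above, here is a hedged sketch using the standard java.util.concurrent classes (a semaphore limiting concurrent access, and a barrier making threads rendezvous):

import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.Semaphore;

public class PrimitivesDemo {

    public static void main(String[] args) {
        Semaphore permits = new Semaphore(2);   // at most 2 threads in the section
        CyclicBarrier barrier = new CyclicBarrier(3,
                () -> System.out.println("all three arrived"));

        Runnable task = () -> {
            try {
                permits.acquire();              // blocks if 2 threads are already inside
                System.out.println(Thread.currentThread().getName() + " working");
                permits.release();
                barrier.await();                // wait for the other two threads
            } catch (Exception e) {
                Thread.currentThread().interrupt();
            }
        };
        for (int i = 0; i < 3; i++) {
            new Thread(task).start();
        }
    }
}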
First of all, I love Python, and I currently use it for most stuff. However, as a PhD student, I mostly implement prototypes for testing and evaluating ideas. This also means that I'm usually the only one coding, and that -- while I certainly try to write halfway efficient code -- performance is not a primary issue. And for quick prototyping, Python is, for me, just neat.
Now I'm considering making some of my stuff more "serious", i.e., bringing it into a production environment, making it more maintainable, and maybe more efficient. So I wonder if it's worth rewriting my code in, say, Java (with which I'm also reasonably familiar). I know that Python is not slow, but things like Java's static typing seem to make it less prone to errors on a larger scale, particularly when different people work on the same project.
It's only worth it if it solves a real problem. Note that the problem could be:
I want to learn something better
I need it to go faster to reduce power requirements in my colo.
I need to hire more people and the talent pool for [insert language here] is too small.
Insert innumerable real problems here.
Python and Java are both suitable for production. Write it in whatever makes it easiest to solve the problems you and/or your team are facing, and if you want to preempt some problems, make sure you've done your homework. Plenty of projects have died because they chose C/C++ believing performance was going to be a major factor, without thinking about the extra effort involved in using these languages well.
You mentioned maintainability. You're likely to require more code to rewrite it in Java and there's a direct correlation between Bugs and LOC. It's up for debate which one is easier to maintain. I'm sure both camps believe theirs is.
Of the two which one do you enjoy coding with the most?
The crucial question is this one: "Java's static typing including seems to make it less prone to errors on a larger scale". The crucial word here is "seems." Sure, Java will help you catch this one particular type of error. But how important is that, and what do you have to pay for it? The overhead imposed by Java's type system means that you have to write more lines of code, which means reduced productivity. I've used both and I have no doubt that I'm more productive in Python. I have found that type-related bugs in Python are generally easy to find and fix. Keep in mind that in a professional environment you're not going to ship code without testing it pretty carefully. The bottom line for a programming environment is productivity - usable functionality per unit of effort, not the number of bugs you found and fixed during development.
My advice: if you have a working project written in Python, don't rewrite it unless you're certain there's a benefit.
Java is inherently object-oriented. Python, alternatively, is procedural.
As far as the ability of the language to handle large projects you can make do with either.
As far as producing more usable products, I would recommend JavaScript as opposed to Java because of its viability in the browser. By embedding your JS in a publicly hosted website, you allow people with no coding knowledge to run your project seamlessly in the browser.
Furthermore, all the GUI design features of HTML are at your disposal.
That said, any language has its ups and downs, and anything I've said here is simply my perception.
In many languages, for me specifically Java and C++, there is a massive standard library. Many classic problems in computer science (searching, sorting, hashing, etc.) are implemented in this library. My question is: are there any benefits to, say, implementing one's own algorithm versus simply using the library's version? Are there any particular instances where this would be true?
I only ask because in school a great deal of time is spent on, say, sorting; however, in my actual code I have found no reason to utilize this knowledge when people have already implemented and optimized a sorting algorithm in both Java and C++.
EDIT: I discussed this at length with a professor I know and I posted his response, can anyone think of more to add to it?
Most of the time, the stock library functions will be more performant than anything you'll custom code.
If you have a highly specific (as opposed to a generic) problem, you may find a performance gain by coding a specialized function, but as a developer you should make a conscious effort to not "reinvent the wheel."
Sorting is a good example to consider. If you know nothing whatsoever about the data to be sorted, except how to compare elements, then the standard sort algorithms fare well. In this situation, in C++, the STL sort will do fine.
But sometimes you know more about your data. For example, if your data consists of uniformly distributed numbers, a radix sort can be much faster. But radix sort is 'invasive' in the sense that it needs to know more about your data than simply whether one number is bigger than another. That makes it harder to write a generic interface that can be shared by everyone. So STL lacks radix sort and for this case you can do better by writing your own code.
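For illustration, here is a sketch of such an 'invasive' sort in Java (one common LSD formulation, assuming non-negative int keys; not the only way to write it):

public final class RadixSort {

    // Sorts non-negative ints with 4 counting passes of 8 bits each.
    public static void sort(int[] a) {
        int[] src = a;
        int[] dst = new int[a.length];
        for (int shift = 0; shift < 32; shift += 8) {
            int[] count = new int[257];
            for (int x : src) count[((x >>> shift) & 0xFF) + 1]++;    // histogram
            for (int i = 0; i < 256; i++) count[i + 1] += count[i];   // prefix sums
            for (int x : src) dst[count[(x >>> shift) & 0xFF]++] = x; // scatter
            int[] tmp = src; src = dst; dst = tmp;                    // swap buffers
        }
        // After an even number of passes the sorted data is back in a.
    }
}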
In general, standard libraries contain very fast code for very general problems. If you have a specific problem, you can in many cases do better than the library. Of course, you may eventually come across a complex problem which is not solved by a library, in which case the knowledge you have gained from studying solutions to solved problems could prove invaluable.
In college, or school, or if learning as a recreational programmer, you will be (or in my strident opinion, you should be) encouraged to implement a subset of these things yourself. Why? To learn. Tackling the implementation of an important already invented wheel (the B-Tree) for me was one of the most formative experiences of my time in college.
Sure I would agree that as a developer you should make an effort not to reinvent the wheel, but when learning through formative experiences, different rules apply. I read somewhere else on this forum that to use something at abstraction level N, it is a very good idea to have a working knowledge of abstraction level N-1, and be familiar with level N-2. I would agree. In addition to being formative, it prepares you for the day when you do encounter a problem when the stock libraries are not a good fit. Believe me this can happen in your 50 year career. If you are learning fundamentals such as data structures, where the end goal is not the completeness of your finished product but, instead, self improvement, it is time well spent to "re-invent the wheel".
Is pre-algebra/algebra/trigonometry/calculus worth learning?
I can't tell if this is an "am I wasting my time/money in school" question or a sincere question about whether your own version is going to be better.
As for wasting your time/money in school: If all you want to do is take pot shots at developing a useful application, then you're absolutely wasting your time by learning about these already-implemented algorithms -- you just need to kludge something together that works good 'nuff.
On the other hand if you're trying to make something that really matters, needs to be fast, and needs to be the right tool for the right job -- well, then it often doesn't exist already and you'll be back at some site like Stack Overflow asking first or second year computer science questions because you're not familiar enough with existing techniques to roll your own variations.
Depending on my job, I've been on both sides. Do I need to develop it fast, or does it have to work well? For fast application programming, it's stock functions galore unless there's a performance or functionality hindrance I absolutely must resolve. For professional game programming it has to run blazing fast. That's when the real knowledge kicks into memory management, IO access optimization, computational geometry, low level and algorithmic optimization, and all sorts of clever fun. And it's rarely ever a stock implementation that gets the job done.
And did I learn most of that in school? No, because I already knew most of it, but the degrees helped without a doubt. On the other hand, you don't know most of it (otherwise you wouldn't be asking), so yes, in short: it is worthwhile.
Some specific examples:
If you ever want to make truly amazing games, live and breathe algorithms so you can code what other people can't. If you want to make fun games that aren't particularly amazing, use stock code and focus on design. It's limiting, but it's faster development.
If you want to program embedded devices (a rather large market), often stock code just won't do. Often there's a code or data memory constraint that the library implementations won't satisfy.
If you need serious server performance from modest hardware, stock code won't do. (See this Slashdot entry.)
If you ever want to do any interesting phone development the resource crunch requires you to get clever, even often for "boring" applications. (User experience is everything, and the stock sort function on a large section of data is often just too slow.)
Often the libraries you're restricted to using don't do what you need. (For example, C# doesn't have a "stable" sort method. I run into this annoyance all the time and have since written my own solution.)
If you're dealing with large amounts of data (most businesses have it these days) you'll end up running into situations where an interface is too slow and needs some clever workarounds, often involving good use of custom data structures.
Those libraries offer you tested implementations that work well, so the rule of thumb is to use those implementations. If you have a very particular/complex problem where you can use some domain knowledge, you have a case where you will need to implement your own version of an algorithm.
I remember an example Bill Pugh gave in his programming languages class where they analyzed the performance of a complex application and realized that a faulty custom implementation of a sorting algorithm by a programmer (code that was used many times in real runs of the application) was responsible for a 90% performance decrease!
After discussing this at length with a professor of Computer Science, here are his opinions:
Reasons to Use Libraries
1. You are writing code with a deadline.
There is no sense in hampering your ability to complete a project in a quick and timely manner. That's why libraries are written, after all: to save time and avoid "reinventing the wheel".
2. If you want to optimize your code fully.
Chances are the team of incredibly talented people who wrote the algorithm in Java's or C++'s or whoever's library did a far better job of optimizing their algorithm for that language, in however long it took them, than you could possibly do in an hour or two. Or four.
3. You've already solved this problem before.
If you have already solved this problem and have a good complete understanding of how it is designed you don't need to labor over a complex solution as you don't stand to gain much benefit.
That being said, there are still many reasons to make your own solution.
Reasons to Do It Yourself
1. A fundamental understanding of problem-solving techniques and algorithms is completely necessary once you reach a problem that is better optimized by a non-library solution.
Highly specialized problems like this often come up when working with networking or gaming and the like. It becomes invaluable to be able to spot situations in which a specific algorithm will outperform the library's version.
2. Having a very good understanding of algorithms and their design and use makes you much more valuable in the work place.
Any halfway decent programmer can write a function to compare two objects and then toss them into a library function; however, the one who is able to spot a situation and ultimately improve the program's functionality and speed is going to be looked upon well by management.
3. Having the concept of how to do something is often just as, if not more so, valuable than being able to do it.
With an outstanding knowledge of Java's libraries and how to use them, chances are you can field any problem in Java with reasonable success. However, when you get hired to work in Erlang, you're going to have some rough times ahead. Whereas if you had known how, and not merely what, Java's libraries did, you could move those ideas to any language.
4. We as programmers are never truly satisfied with merely having something "work".
Chances are that you have an itch to understand why things work. It was this curiosity that probably drove you to this area of study. Don't deny this curiosity! Encourage it and learn to your heart's content.
5. Finally, there is a huge feeling of success and accomplishment that comes with creating your own personal way of sorting or hashing etc.
Just imagine how cool your friends will think you are when you proclaim that you can find the shortest path between 2 vertices in n log(n) time! On a serious note, it is very rewarding to know that you are completely capable of understanding and choosing an optimal solution based on knowledge, not on what some library gives you.
I know embedded C is used for microcontrollers, along with other languages. But what if the control comes from a PC? For that I had two possible candidates: Java and C++.
Java is simple and easy, and also developer-friendly when it comes to threading or GUIs, but of course C++ has much better performance (I know computers are getting faster, and performance depends on good algorithms), though the compilation, makefiles, shared libraries and cross-compiling waste lots of time on technicalities when I should be working on other, more important issues.
But still, I've faced things like const references, which Java doesn't support, forcing you to use clone() or copying, and when that came to arrays it was a giant mess.
NOTE: I'm going to use inverse kinematics and maybe neural networks for pattern recognition, which requires tons of calculations. But as I said, I also care about the whole life cycle of the project (speed of development, performance, user-friendliness and quick deployment).
I'm swinging between languages, and I'm planning for a long-term learning process, so I don't want to waste that on the wrong language, or let's say, choose without asking. So please help, and I hope this question won't be considered subjective but a reference.
cheers
Why did you eliminate C?
Why do you think Java has worse performance than C++? Some things are as good as in C++, and it is easy to use a Java program on different platforms without much hassle.
Just pick the language you feel comfortable and you have most experience with, and go with it.
Personally I would lean toward C++. Java has a garbage collector, which can put your app to sleep at random. In C++ I have to collect my own garbage, which gives me an incentive to generate less of it. Also, C++ allows macros, which I know have been declared a bad thing by Java-nistas, but which I use as a way of shortening the code and making it more like a DSL. Making the code more like a DSL is the main way I shorten development effort and minimize introducing bugs.
I wouldn't assume that Java is inherently slower than either C++ or C. IME slowness (and bigness) comes not from how well they spin cycles, but from the design practices that they encourage you to follow. The nice things they give you, like collection classes, are usually well-built, but that doesn't stop you from over-using them because they are so convenient.
IME, the secret of good performance is to have as little data structure as possible (i.e. minimal garbage), and keep it as normalized as possible. That way, you minimize the need to keep it consistent via message-waves. To the extent the data has to be unnormalized, it is better to be able to tolerate temporary inconsistency, that you periodically patch up, than to try to keep it always consistent through notifications (which OO languages encourage you to do). Unless carefully monitored, those make it extremely easy to introduce performance bugs.
Here's an example of some of these points.
I wouldn't worry too much about performance at first - write the code in whatever language you feel comfortable in, and then refactor as necessary.
You can always use something like JNI to call out to C/C++ if needed, although the performance gap between Java and C/C++ is nowhere near what it was...
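For example, the Java side of such a JNI call might look like this sketch (the library name "kinematics" and the solve method are hypothetical; the native half would be implemented in C/C++ and compiled separately):

public class IkSolver {

    static {
        // Expects libkinematics.so / kinematics.dll on the library path;
        // calling solve() without it throws UnsatisfiedLinkError.
        System.loadLibrary("kinematics");
    }

    // Implemented on the native side; the header stub can be
    // generated with `javac -h`.
    public static native double[] solve(double[] jointAngles, double[] target);
}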
Depending upon your circumstance, Java is no more quick to deploy than is C++. This mainly boils down to: are you guaranteed the same environment in your testbed that you are in production? With all of the modern additions to C++, there is little cause to suggest that Java is easier on the developer unless you are still new to the C++ language.
That aside, you have performance concerns. Unless it is a real-time system, there's no reason to eliminate any language just yet. If you code your Java intelligently (for instance, do your best to avoid copying objects and creating garbage in the most-used sections), the performance differences won't be seriously noticeable for a compute-bound process.
All told, I think you are focusing too much on textbook definitions of these two languages rather than actual use. You haven't really given any overriding reason to choose one over the other.
Java is a bit more portable, but as far as I know the only real factor for something like this is personal preference.
It would really help if you described your problem in greater detail.
You want to use IK, which might suggest some robotic arm manipulation. What you haven't said is your real-time requirements. If it's going on a class-A production line, it'll be hard to get away with a garbage-collected language.
Java is great. There are some very mature NN libraries (Neuroph, Encog) which could save you a lot of coding time. I don't know of any IK library, but I'm sure there are at least good matrix manipulation libraries to help.
Garbage collection in Java is getting better and better. The latest collector (G1) is a lot better than anything else, but even with it the best you can get is soft real-time, so you can't expect a pause-free run.
On the other hand, you might also want to look at some dedicated environments: Matlab toolboxes for robotics and artificial intelligence. I think that would yield the fastest prototypes.
If it's going into production, then you are pretty much stuck with C or C++.
I am constantly reading about how much Cobol code is still in production, and the main reason it hasn't been rewritten in a more modern language is that it would take too long/cost too much.
My question is: If there was a tool that converted Cobol to, say, Java, would any organizations find it useful? Or would they rather continue maintaining what they know already works?
Currently, a large volume of the COBOL code (I'd estimate well over 90%) is untestable.
No one knows what it really does.
They know that -- minimally -- it does the expected job most of the time. And when it doesn't, the bugs are known.
Worse, some percentage of COBOL is just workarounds for bugs in other parts of the COBOL.
Therefore, if you subject it to any scrutiny, you'll find that you don't know what's really going on. You can't create test cases.
Indeed, you'll find that most organizations can't even agree on what's "right". But they're willing to compromise on what's available.
The cost and risk of examining the core business processing is unthinkable.
Any conversion tool would have risks associated with it, and the resulting code would have to undergo a lot of testing.
Given that a lot of these systems are in use daily to run a business, a lot rides on the continuing operation. So it is not just "how long" or "how expensive", but can we trust it to work 100% the same.
One will always find tools to convert one language to another - they usually go by the term "compilers".
There is always a shortcoming with compilers that have to perform the task of converting code in language X to language Y, especially when the said code was written by a person. That shortcoming is that readability is often lost in the process of translation. There is no guarantee that the code compiled from COBOL to Java will be understood by any programmer, so in effect the cost of translation has actually increased. In fact, it is difficult to define readability in such a context.
Lack of readability and understandability translates into lack of knowledge of runtime behavior of the translated code. Besides there is no guarantee that people understand the original code completely; surely they do understand bits and pieces of it.
Probably a little of both. There are companies that provide tools and services for conversion using both automated and manual techniques.
Many companies, however, follow the "ain't broke" philosophy, which is likely as wise as anything. Especially since many conversions result in attempts to "improve" the existing system or try to introduce modern software design/construction philosophies and result in a mess.
Many systems written in Cobol have many transactions going through them. They work well on the mainframe platforms that they run on. It would be risky to change them just for the sake of change.
I think some organizations could find it useful, particularly organizations where interfacing with/designing around legacy code has become more costly and problematic than converting the code to Java (or another language):
while ( (CostToPortToJava > CostOfNotPortingOverTime++) && DoesLegacyCodeStillWork() )
{
    StayWithLegacyCode();
}
PortCodeToJava();
There are a few factors here:
Cobol program files are super long and just about always on ultra-secure mainframes. Usually the Java developers don't have access to them.
Colleges and universities haven't taught Cobol for more than 20 years. As a result, all of the really top-notch Cobol developers have moved up in their companies and been replaced with a bunch of tech school grads. These people didn't love programming enough to be hackers (or they'd do C, Python, C++, whatever, and wouldn't have taken a course) or enough to go to school (and do Java, .NET, Python, whatever).
Java developers generally lose their minds when they look at Cobol programs in their 50,000 line glory, so they aren't any help.
There really aren't any documents, and the logic is so tight in these programs that you should really just read them and convert them.
Most of these companies are financial companies, where the best way to blow up and not be in the industry anymore is to screw something up. A good way to screw something up is to tackle something like converting a critical task from Cobol to Java.
It's going to take a long time - every so often, part of one of the programs stops working or can't do something, and it gets replaced. I don't see a lot of senior managers having the stomach for all of the FUD in one of these projects, and the timeframes are pretty long in terms of return on money spent.
COBOL is, in effect, a superb DSL (domain-specific language).
Its domain is business rules as embedded in (mainly) backend applications.
Find another language that....
is feature rich in that specific domain
has some years of actual, applied experience behind it, so all the gotchas are cured or out in the open
has a TCO (total cost of ownership) lower than the existing COBOL legacy mountain
is cost-effective to convert to
....and you will have the killer application for backend business applications.
Something to realize about old COBOL applications, besides the language dissimilarity, is that a lot of the data structures built into these applications don't conform to any later RDBMS structure, so really you would be talking about rethinking a lot of the underlying architecture and design, not just changing the language syntax; replacing that would carry a lot of performance risk once it hit real-world loads, even if it could be QA'd sufficiently.
The bottom line is that it is more economical to bolt on new features in a modern language than rewrite it. As long as that continues to be the case, COBOL will continue to live on.
Cobol has the advantage of being fast at moving data around, which is what those kinds of applications tend to do a LOT. Also, the machines are designed for I/O, not processing speed. Hence, any translation to another language will most likely be slower than the Cobol counterpart on identical or similar hardware, leaving no reason to do so.
Let me ask a counter-question: WHY convert it, if you have something in place that works?
(Similar to tearing down a bridge every 10 years just to rebuild it again right afterwards - it is almost always cheaper just to maintain what you have.)
There are translators around which can be modified at little cost to run on a specific machine or operating system; some are available from England and can be run there or on site. Standard versions exist for the major models (anyone can contact me about them). Converting Cobol to another language's source code or script is relatively easy to do automatically, and it would produce a text file for import into a source file on the target machine with 95 percent or more code compatibility. Simple manual amendments are all that are necessary before running the compiler or JIT software to produce a new program - do not forget to amend the job control language or macro for mainframe jobs when testing or going live. New Cobol compilers exist for ICT/ICL mainframes and one or two others; these compile faster than the old software, and sometimes the newly compiled program can run several times faster.