In Java, when you cast, let's say, a double to an int, you do this:
int x = (int)(2.5 * 0.4);
In Python, the equivalent syntax is much nicer to read:
x = int(2.5 * 0.4)
Where does this strange form of casting come from? Why is it used?
EDIT:
How is this primarily opinion-based? I am looking for the factual history of where and why this syntax came from. Please reconsider.
Java's syntax was deliberately and consciously modelled on C (and, to a lesser degree, C++) syntax.
Both C and C++ use (<type>) <expr> as the syntax for type casting.
So ...
Where does this strange form of casting come from?
C and C++
Why is it used?
To further the Java design goal of syntactic similarity with C and C++.
This may seem strange to you. However, in the context in which Java was originally designed, the C & C++ languages were dominant. The designers needed to make it easy for C & C++ programmers to transition to Java. If they had ignored this, Java would most likely never have taken off.
Both of those styles have been around for a while: functional and C-like. Given C's prevalence as a language, Java mimicked its style. C++ actually allows both styles. Python had different goals in choosing its style conventions.
Java was invented at a time when there were a huge number of C programmers in the professional work force, and a slightly smaller number of C++ programmers. The language designers deliberately made the syntax of Java very similar to C and C++, so that this large horde of people would find it easier to learn and adapt to Java. C-style casting is just one of many syntactic elements in C that Java also uses.
I want to use both Python and Java in the same program, since Python's print() function is better, but Java's int variables are more efficient.
If I'm interpreting correctly, you want to use both interchangeably in the same file, so you'd end up with code like:
def main():
    int x = 5;
    print(x)
This is impossible, because there would be ambiguity when trying to interpret code if you allowed constructs from both languages. For example, "X" + 1 is allowed in Java and gives you the string "X1". In Python, it gives you an error, because you can't add an int to a string. So there would be no way to know what your code should do when it is runnable in both languages.
This is a problem that all of us face, where we like some parts of some languages and other parts of other languages. The solution is pretty much just to decide what's most important, choose one language based on that, and then put up with the parts you don't like.
You can use Jython, which is a Python implementation based on the JVM/JDK. This allows calling between Java and Python code in both directions.
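As a rough sketch of the Java-to-Python direction (assuming Jython 2.7 is on the classpath; the class name MixDemo is made up for illustration):

import org.python.util.PythonInterpreter;

public class MixDemo {
    public static void main(String[] args) {
        // Declare the variable in Java, then print it with Python's print().
        PythonInterpreter interp = new PythonInterpreter();
        int x = 5;
        interp.set("x", x);      // expose the Java value to the Python namespace
        interp.exec("print(x)"); // Python prints 5
        interp.close();          // release interpreter resources (Jython 2.7+)
    }
}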
I handed in an assignment recently for my Computer Science course. In it I used the Random class over the Math.random() method to generate random numbers. My lecturer marked me down for this, stating that it was an "unnecessary complication" and that I should avoid importing classes when not absolutely needed.
I've nothing against her for this, and I accept that this is her preferred method and that it has its merits, but I would appreciate the opinion of a wider (perhaps more experienced) group: since Math.random() calls the Random class anyway and (afaik) creates a new Random object each time the method is called, wouldn't it make sense to just cut out the middleman?
Thanks
Math.random() does use Random, but it uses a single instance.
However, Math.random() is equivalent to calling nextDouble() on that shared instance, and a raw double in [0, 1) is usually not what you want. It would be foolish to use it instead of the Random class, which has plenty of convenience methods that make your intention clear and bugs less likely, as demonstrated in the following snippet.
Random rnd = new Random(); // java.util.Random
int x = (int)(Math.random() * 100); // without the inner parentheses, (int)Math.random() is always 0
int y = rnd.nextInt(100);           // same intent, no casting pitfall
I suspect your lecturer has strong theoretical knowledge of programming.
I'd argue that your instructor is flat-out wrong. Maintainability should be one of your primary goals, and reproducibility is essential to debugging and maintainability. Math.random() gives you no control over the seeding, and consequently no reproducibility if something weird is noted during testing and debugging.
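For instance, a minimal sketch of what seeding buys you:

Random seeded = new Random(42L);         // fixed seed, any constant will do
System.out.println(seeded.nextInt(100)); // same value on every run,
System.out.println(seeded.nextInt(100)); // so a failure seen once can be replayed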
I would not be surprised if this question became closed for being too subjective.
But anyway -
I would say this depends on the context. Did your 'mistake' of using Random cause bad performance, unreadable code, or anything at all? If not, then I think it is fine.
One can nitpick about these kinds of things, but in reality, in my opinion at least, there are larger things to worry about than such theoretical problems.
Where can I learn algorithms for programming (Java etc.)? Whenever I search for programs such as permutations, derangements, sorting, etc., I always find math algorithms.
Example: Counting Derangement
From this, the following relation is derived:

!n = (n - 1)(!(n-1) + !(n-2))

where !n, known as the subfactorial, represents the number of derangements, with the starting values !0 = 1 and !1 = 0.

Notice that this same recurrence formula also works for factorials with different starting values. That is, 0! = 1, 1! = 1, and

n! = (n - 1)((n-1)! + (n-2)!)

which is helpful in proving the limit relationship with e below.

Also, the following formulae are known:

!n = n! \sum_{i=0}^{n} \frac{(-1)^i}{i!}

!n = \left\lfloor \frac{n!}{e} + \frac{1}{2} \right\rfloor, \quad n \geq 1

!n = \left[ \frac{n!}{e} \right], \quad n \geq 1
Also, another example: when I look up sorting in Java I see O(n log n) or O(log n) terminology, which I don't understand at all. I am not very good at math, but at the same time I am very interested in programming. Please help me find a book or a site for understanding the sorting algorithms required in programming languages.
Algorithms are about mathematics. They are language-agnostic. You can implement algorithms in any language as long as you know its grammar, i.e. its basic datatypes, operators, decision making, etc. Many languages provide libraries implementing known and/or useful algorithms or functionalities (for instance for sorting, encryption, etc.)
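For example, the derangement recurrence quoted in the question drops almost word for word into Java (a minimal sketch; the value overflows a long once n exceeds about 20):

// !n = (n - 1) * (!(n-1) + !(n-2)), with !0 = 1 and !1 = 0
static long subfactorial(int n) {
    if (n == 0) return 1;
    if (n == 1) return 0;
    long prev2 = 1; // !(i-2), starts at !0
    long prev1 = 0; // !(i-1), starts at !1
    long current = 0;
    for (int i = 2; i <= n; i++) {
        current = (i - 1) * (prev1 + prev2);
        prev2 = prev1;
        prev1 = current;
    }
    return current; // e.g. subfactorial(4) == 9
}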
That's why searching for "java algorithms" is a bad search string. You should rather search for "java programming basics".
If you want to understand what lies behind (the beauty of) algorithmics, I strongly recommend reading this great book: "Programming Pearls" (2nd edition). The first edition was written in 1983, and it is interesting to understand why the author decided to write a second edition 17 years later.
You can also have a look at online lectures, for instance MIT ones.
Concerning the O(log(n)) part of your question: this is notation for expressing the computational complexity of an algorithm. It is important when you want to understand the performance you can expect from an algorithm, or when you want to communicate the performance of your own algorithms.
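As a concrete illustration (a sketch, not from any particular library): binary search halves the remaining range on each step, so on an array of n elements it needs about log2(n) comparisons, which is exactly what O(log n) expresses.

// Returns the index of target in a sorted array, or -1 if it is absent.
static int binarySearch(int[] sorted, int target) {
    int lo = 0, hi = sorted.length - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;           // written this way to avoid int overflow
        if (sorted[mid] == target) return mid;
        if (sorted[mid] < target) lo = mid + 1; // discard the lower half
        else hi = mid - 1;                      // discard the upper half
    }
    return -1;
}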
For Java you can start with Oracle's tutorials.
I took Algorithms I and Algorithms II on Coursera, they are great. There is also a textbook for that course.
O(n log n) and O(log n) are Big O notation. I linked to the sections where the most common cases (like the examples you asked about) are explained.
There is also an excellent answer on Stack Overflow.
See Algorithm Tutorials on Topcoder for good articles.
The Importance of Algorithms is a good tutorial (it explains basic algorithms and gives examples of Big O notation).
Basics of combinatorics covers your problem: derangements.
For books, see Introduction to Algorithms and Algorithms, 4th Edition.
Java's designers felt that unsigned integers were unnecessary. Specifically, they felt that the concept of unsigned was used mostly to specify the behavior of the high-order bit, which defines the sign of an integer value. Java manages the meaning of the high-order bit differently, by adding a special unsigned right shift operator (>>>). Thus, the need for an unsigned integer type was eliminated. Why, then, does Java 8 have some support for unsigned integers?
Java 8 includes just some helper methods (static methods on java.lang.Integer and java.lang.Long) that implement commonly needed operations.
Most of them are quite trivial if you understand two's complement (http://en.wikipedia.org/wiki/Two%27s_complement), but as experience shows, many programmers have struggled (as is evident from the number of related questions on SO) to arrive at those simple solutions for these operations.
There is no magical difference between a signed and an unsigned int: viewed as bit patterns, signed and unsigned look the same. The difference lies in the interpretation of said patterns. It's relatively simple to emulate any unsigned operation using signed types, so unsigned types are not an absolutely necessary language element for performing unsigned arithmetic.
In short: there are no unsigned types in Java 8 because it would be a huge effort to add them (new primitive types would also require large additions to the bytecode format and the JLS).
There are some helper methods because that's what is commonly needed and hard to get right (for the average Joe developer).
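A small sketch of both points, using helper methods that exist on java.lang.Integer since Java 8:

int a = 0xFFFFFFFE;                              // as a signed int this is -2
long emulated = (a & 0xFFFFFFFFL) / 10;          // manual emulation: widen to long and mask -> 429496729
int helper = Integer.divideUnsigned(a, 10);      // same result, intention is obvious
System.out.println(Integer.toUnsignedString(a)); // prints 4294967294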
For the first cut, I've favored keeping the code straightforward over trickier but potentially faster algorithms. Tests need to be written for the unsigned divide and remainder methods, but otherwise the regression tests are fairly extensive.

To avoid the overhead of having to deal with boxed objects, the unsigned functionality is implemented as static methods on Integer and Long, etc. as opposed to introducing new types like UnsignedInteger and UnsignedLong.
http://mail.openjdk.java.net/pipermail/core-libs-dev/2012-January/008926.html
Also refer to this: https://blogs.oracle.com/darcy/entry/unsigned_api
Though Oracle's technotes state that:
In Java SE 7 and later, any number of underscore characters (_) can appear anywhere between digits in a numerical literal. This feature enables you, for example, to separate groups of digits in numeric literals, which can improve the readability of your code.
example:
float pi = 3.14_15F;
is the same as
float pi = 3.1415F;
But does it not become confusing to developers working on code written by someone else?
Also, does the use of underscores put any overhead on the compiler?
But does it not become confusing to developers working on code written by someone else?
Only if the developers don't understand the Java language! This construct has been supported for long enough that every Java professional should recognize it ... even if they don't use it in their own code.
On the other hand, if your Java developers have not bothered to keep up to date with the new things in Java 7, they may be (temporarily) baffled. But the real solution is to educate your developers.
Also, does the use of underscores put any overhead on the compiler?
The overhead would be so small that it is impossible to measure: the underscores are simply discarded when the compiler parses the literal, so they never reach the bytecode.
There is no performance issue here.
The only time it makes sense to use underscores is in a very large integer or in a binary integer. Like almost any bit of syntactic freedom the language provides, people are free to misuse it and write difficult-to-read code. I doubt this underscore feature will become a problem any more than the freedom to add extra whitespace is a problem.
The best example of when you would want to use this is with binary numbers, where it is customary to place a space between every 4 bits.
For instance, compare:
int bitField = 0b110111011111;
int bitField2 = 0b1101_1101_1111; // clearly more readable
Other examples might include a credit card number or an SSN, as given in Oracle's documentation of this feature.
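For reference, the groupings shown in that documentation look like this:

long creditCardNumber = 1234_5678_9012_3456L; // grouped the way it appears on the card
long socialSecurityNumber = 999_99_9999L;     // SSN grouping
long hexBytes = 0xFF_EC_DE_5E;                // one group per byte
long hexWords = 0xCAFE_BABE;                  // one group per 16-bit word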