How are keywords represented in binary form? - java

For example: in Java, how is sin() represented in binary? How are sqrt() and other functions represented?
If not only in Java, then in any language, how is it represented? Because ultimately everything is translated into binary and then into on and off signals.
Thanks in advance.

Firstly, sin is not a keyword in Java. It is an identifier. Keywords are things like if, class, and so on.
It depends on what stage you are asking about.
In the source code, the sin identifier is represented as characters, and those characters are represented as bits (i.e. binary), if you want to look at it that way.
In the classfile that is output by the javac compiler, the word sin is represented as a string in the Constant Pool. (The JVM spec specifies the format of classfiles in great detail.)
When the classfile is first loaded by a JVM, the word sin becomes a Java String object.
When the code is linked by the JVM, the reference to the String is resolved to some kind of reference to a method. (The details are implementation specific. You'd need to read the JVM source code to find out more.)
When the code is JIT compiled, the reference to the method (typically) turns into the address in memory of the first native instruction of the JIT-compiled method. (Strictly speaking, this is not "assembly language", but the native instructions could be represented as assembly language. Assembly language is really just a "human friendly" textual representation of the instructions.)
So how does the computer know that, when sin is written, it has to compute the sine of a number?
What happens is that the Java runtime loads the class containing the method. Then it looks for the sin(double) method in the class that it loaded. What typically happens is that the named method resolves to some bytecodes: the instructions that tell the runtime what the method should do. But in the case of sin, the method is a native method, and the instructions are actually native instructions that are part of one of the JVM's native libraries.
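To make that concrete, a native method declaration looks roughly like this. (This is a hypothetical MyMath class and library name, not the real java.lang.Math source; the actual JDK wiring is more involved.)

// Hypothetical sketch: a native method has no Java body. Its implementation is
// machine code inside a native library that the JVM loads, and the JVM links
// the name "sin" to that code when the method is first resolved.
public final class MyMath {
    static {
        // Made-up library name; this call would fail unless such a library really exists.
        System.loadLibrary("mymath");
    }
    public static native double sin(double a);
}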
If not of methods, can we have a binary representation of keywords, like int and float, etc.?
It depends on the actual keywords. But generally speaking, genuine Java keywords are transformed by the compiler into a form that doesn't have a distinct / discrete representation for the individual keywords.
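You can see this for yourself by compiling a trivial method and disassembling it with javap -c. The output below is approximate and varies by compiler version, but the point is that the keyword int never appears as a token in the class file; it survives only in the method descriptor (II)I and in the choice of the i-prefixed (integer) instructions.

// Source:
int add(int a, int b) {
    return a + b;
}

// Approximate "javap -c" output for the compiled method:
//    0: iload_1      // push a
//    1: iload_2      // push b
//    2: iadd         // integer addition: this is where "int" ended up
//    3: ireturn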

If not only in Java, then in any language, how is it represented? Because ultimately everything is translated into binary and then into on and off signals.
This tells me that you probably have a fundamental misunderstanding of how programming languages are implemented. So instead of answering this question (it doesn't really have a proper answer other than "well they're not represented at all"), I will try to help you understand why this question is the wrong one to ask.
Your computer runs machine code, and only machine code. You can feed it any random sequence of bytes; it doesn't matter what they were intended to be, because as soon as you point the program counter at them, they will be interpreted as if they were machine code (of course, giving it bytes that were not intended to be machine code is probably a bad idea). As a running example, I'll use this x64 code:
48 01 F7 48 89 F8 C3
If you have no idea what's going on, that's normal at this level. Most people don't read machine code (but they could if they learned it, it's not magic). This is where the zeroes and ones are, to the processor it's not even in hexadecimal, that's just what humans like to read.
At a level above that there is assembly, which is in most cases really just a different way of looking at machine code, in such a way that humans find it easier to read. The example from earlier looks more sensible in assembly:
add rdi, rsi
mov rax, rdi
ret
Still not very clear what's going on to someone who doesn't know x64 assembly, but at least it gives some sort of clue: there's an add in it. It probably adds things.
At a yet higher level, you could have Java bytecode or Java, but I think the Java aspect of this question misses the point; it's probably there because the OP doesn't realize that Java is different from "the classic picture". Java just complicates matters without explaining the big picture, so let's use C instead. The example in C could look like:
#include <stdint.h>   /* for int64_t */

int64_t foo_or_whatever(int64_t x, int64_t y)
{
    return x + y;
}
If you don't know C but you do know Java, the only strange thing here is int64_t, which is roughly the equivalent of a long in Java.
So yes, things were added, as the assembly code suggested. Now where did the keywords go?
That question doesn't make as much sense as you thought it did. The compiler understands keywords and uses them to create machine code that implements your program. After that point they stop being relevant. They only mean something in the context of the high-level language that you wrote the code in; you could say that at that level, they are stored as ASCII or UTF-8 strings in a file. They have nothing to do with machine code, they do not appear in any form there, and you can write machine code without having translated it from a high-level language that has keywords. That return and ret look vaguely similar is a bit of a red herring; they have something to do with each other, but the relation is far from simple (that it worked out simply in the example I'm using is of course no accident).
The int64_t has perhaps not entirely disappeared (mostly it has, though). The fact that the addition operates on 64-bit integers is encoded in the instruction 48 01 F7. Not the keyword int64_t (which isn't even a keyword, but let's not get into that), but "the fact that what you have there is an addition between 64-bit integers", which is a conceptually different thing, though caused here by the use of int64_t. To split that instruction out while skipping some of the detail (because this is a beginner question), there's
48 = 01001000 encoding REX.W, meaning this instruction is 64bit
01 = 00000001 encoding add rm64, r64 in this case
F7 = 11110111, the ModRM byte, encoding the operands rdi and rsi
To learn more about what the processor does with machine code (in case your follow-up question is "but how does it know what to do with something like 48 01 F7"), study computer architecture. If you want a book, I recommend Computer Architecture, Fifth Edition: A Quantitative Approach, which is quite accessible to beginners and commonly used in first-year courses about computer architectures.
To learn more about the journey from high level language to machine code, study compiler construction. If you want a book, I recommend Compilers: Principles, Techniques, and Tools, but it may be hard to get through it as a beginner. If you want a free course, you could follow Compilers on Coursera (the first few lectures especially will give you an overview of what compilers do without getting too technical yet).
Incidentally, if you give the example C code to GCC, it makes
lea rax, [rdi + rsi]
ret
It's still doing the same thing, but in a way that didn't fit my story, so I took the liberty of doing it in a slightly different way.

sin() is a function, so it's represented as a memory address pointing to its block of code.
Keywords (like for) aren't represented directly in binary; for, for example, is converted into a series of bytecode jump instructions, which are compiled into machine instructions, which are represented as binary.
My point is that you cannot convert most keywords directly into binary. You can unroll them into bytecode, which you can then convert to native machine code and binary, but not directly to binary.
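To see those jump instructions, compile a trivial loop and disassemble it with javap -c. The output below is approximate, and doSomething is a made-up instance method used only for illustration.

// Source: the keyword "for" never appears anywhere in the class file.
void count() {
    for (int i = 0; i < 10; i++) {
        doSomething(i);
    }
}

// Approximate "javap -c" output: the loop has become compares and jumps.
//    0: iconst_0
//    1: istore_1
//    2: iload_1
//    3: bipush        10
//    5: if_icmpge     19
//    8: aload_0
//    9: iload_1
//   10: invokevirtual #2    // Method doSomething:(I)V
//   13: iinc          1, 1
//   16: goto          2
//   19: return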
Here, read this; then, after you understand it, move on to how bytecode is converted to native code.
Keywords and Functions
That said, a keyword in Java (and most languages) is a reserved word like for, while, or return, but your examples are not keywords; they are function names, like sin() and sqrt().

Not really sure what you want to know here; so let's go "bytecode"...
Both the .sin() and .sqrt() methods are static methods of the Math class; therefore, the compiler will generate a call site with the arguments and a reference to the method, and then emit an invokestatic instruction.
Other than invokestatic, you have invokevirtual, invokespecial, invokeinterface and (since Java 7) invokedynamic.
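For instance, a statement such as double r = Math.sqrt(x); compiles to roughly the following (approximate javap -c output; the constant-pool index and local-variable slots are arbitrary and depend on the surrounding method):

// Source (x is a double local variable):
double r = Math.sqrt(x);

// Approximate bytecode:
//    dload_1                 // push the argument x
//    invokestatic  #2        // Method java/lang/Math.sqrt:(D)D
//    dstore_3                // store the result in r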
Now, at runtime, the JIT will kick in, and the JIT may end up producing pure native code, but this is not a guarantee. In any event, the code will be fast enough.
And the same goes for the JDK libraries themselves; the JIT will kick in and maybe turn the bytecode into native code, given sufficient time to analyze it (escape analysis, inlining, etc.).
And since the JIT does "whatever it wants", you cannot reliably have a "binary" representation of any method from any class.

Related

Why was arg = args[n++] more efficient than 2 separate statements in earlier compilers?

From the Book "Core Java for the Impatient", Chapter "increment and decrement operators"
String arg = args[n++];
sets arg to args[n], and then increments n. This made sense thirty years ago when compilers didn't do a good job optimizing code. Nowadays, there is no performance drawback in using two separate statements, and many programmers find the explicit form easier to read.
I thought such use of the increment and decrement operators was only a way to write less code, but according to this quote that wasn't the case in the past.
What was the performance benefit of writing statements such as String arg = args[n++]?
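For reference, here are the single-statement form and the explicit two-statement form written out as methods (assuming an int field n alongside the array). With a modern compiler and JIT they produce equivalent code:

// Assumes an int field n in the enclosing class.
String nextCombined(String[] args) {
    return args[n++];      // read args[n], then increment n, in one expression
}

String nextExplicit(String[] args) {
    String arg = args[n];  // read first ...
    n++;                   // ... then increment in a separate statement
    return arg;
}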
Some processors, like the Motorola 68000, support addressing modes that dereference a pointer and then increment it; for instance, the 68000's post-increment operand form, written (A0)+. Older compilers might conceivably be able to use such an addressing mode for an expression like *p++ or arr[i++], but might not be able to recognize the opportunity when it is split across two statements.
Over the years, architectures and compilers have become better. Given those improvements in CPUs and compilers, I would say there is no single answer to this.
From the architecture standpoint: many processors support a store with pointer auto-increment as a single operation. So in the past, the way you wrote the code could impact the result (one operation versus several). Most notably, DSP architectures were good at parallelizing things, e.g. TI DSPs like the C54xx with post-increment and post-decrement instructions and instructions that operate on circular buffers (e.g. "ADD *AR2+, AR2–, A ; after accessing the operands, AR2 is incremented by one", from the TMS320C54x DSP reference set). ARM cores also feature instructions that allow for similar parallelism (the VLDR and VSTR instructions; see the documentation).
From the compiler standpoint: the compiler looks at how a variable is used in its scope (which was not necessarily the case before). It can see whether the variable is reused later or not. It might be the case that a variable is incremented and then discarded; what is the point of doing that? Nowadays the compiler has to track variable usage, and it can make smart decisions based on that (if you look at Java 8, the compiler must be able to spot "effectively final" variables that are never reassigned).
These operators were, and are, generally used for programmer convenience rather than to achieve performance, because effectively the statement gets split into a two-line statement during compilation. If anything, the overhead of performing the post/pre-increment or decrement operators would be more than that of an already split two-line statement.

Using int flags in lieu of booleans

So, for example, Notification has the following flag:
public static final int FLAG_AUTO_CANCEL = 0x00000010;
This is hexadecimal for the number 16. There are other flags with values:
0x00000020
0x00000040
0x00000080
Each time, it goes up by a power of 2. Converting this to binary, we get:
00010000
00100000
01000000
10000000
Hence, we can use bitwise operators to determine which of the flags are present, etc., since each flag contains only one 1 and they are all in different positions.
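For example (a sketch with constants that follow the same power-of-two pattern, not copied from the Android sources), setting, testing, and clearing flags is just a matter of |, &, and ~:

static final int FLAG_AUTO_CANCEL = 0x00000010;
static final int FLAG_NO_CLEAR    = 0x00000020;

void demo() {
    int flags = 0;
    flags |= FLAG_AUTO_CANCEL | FLAG_NO_CLEAR;             // set two flags in one int
    boolean autoCancel = (flags & FLAG_AUTO_CANCEL) != 0;  // test a single flag
    flags &= ~FLAG_NO_CLEAR;                               // clear a flag
}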
Question:
This all makes perfect sense, but why not just use booleans? Is this merely stylistic, or are there memory or efficiency benefits?
EDIT:
I understand that by combining them, we can store a lot of information in a single int. Is this used solely so we can pass a lot of boolean type values in a single int instead of having to pass a ton of parameters? I don't mean to trivialize that, it's very convenient, but are there any other benefits?
What you're talking about is called a Bit Field. One advantage is that all the information can be contained in a single variable (with no overhead like that of an ArrayList). This is useful for keeping function signatures tidy, and will have some minor benefits with efficiency because of fewer stack operations, but probably this will be offset by additional bitshift operations. Additionally, you can use (for example) one byte to store 8 fields rather than wasting 7 additional bytes. You can also, if you're clever with it, perform several flag checks in a single operation.
Having said that, personal preference may see the list of booleans as cleaner or preferable. Bitfields are most common in embedded systems where space is limited or something of that nature.
In reference to your edit: it's storing the values of the flags in ints, but those are just reference constants-- you aren't editing those, you're sticking those bits into (or out of) the flags field, which is a single int. I don't really know why they chose a bitfield for this application; perhaps someone that grew up programming space-limited microcontrollers coded that specific class. The general consensus seems to be that bitfields shouldn't be included in new code.
This is a common idiom in C, where resource constraints are a much larger concern, and you usually see it in Java where the Java API is directly mapping an underlying well-known C API. However, it's not a great idea in Java for a wide number of reasons.
As of Java 5, most of the uses for one-bit bit fields are taken care of very nicely by EnumSet, which is internally implemented using a bit field (so it's extremely fast) but is type-safe, easy to read, and Iterable.
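As a sketch (with a made-up Flag enum, not the real Notification constants), the EnumSet version reads like this:

import java.util.EnumSet;

// Made-up flags, for illustration only.
enum Flag { AUTO_CANCEL, NO_CLEAR, ONGOING_EVENT }

class FlagDemo {
    void demo() {
        EnumSet<Flag> flags = EnumSet.of(Flag.AUTO_CANCEL, Flag.NO_CLEAR);

        boolean autoCancel = flags.contains(Flag.AUTO_CANCEL);  // type-safe "bit test"
        flags.remove(Flag.NO_CLEAR);                            // "clear" a flag
        flags.add(Flag.ONGOING_EVENT);                          // "set" a flag
    }
}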

Simple physical quantity measurement unit parser for Java

I want to be able to parse expressions representing physical quantities like
g/l
m/s^2
m/s/kg
m/(s*kg)
kg*m*s
°F/(lb*s^2)
and so on. In the simplest way possible. Is it possible to do so using something like Pyparsing (if such a thing exists for Java), or should I use more complex tools like Java CUP?
EDIT: To answer MrD's question: the goal is to make conversions between quantities, so for example converting g to kg (this one is simple...), or maybe °F/(kg*s^2) to K/(lb*h^2), supposing h is for hours and lb for pounds.
This is harder than it looks (I have done a fair amount of work here). The main problem is that there is no standard (I have worked with NIST on units, and although they have finally created a markup language, few people use it). So it's really a form of natural language processing and has to deal with:
ambiguity (what does "M" mean - meters or mega)
inconsistent punctuation
abbreviations
symbols (e.g. "mu" for micro)
unclear semantics (e.g. is kg/m/s the same as kg/(m*s)?)
If you are just creating a toy system then you should create a BNF for the system and make sure that all examples adhere to it. This will use common punctuation ("/", "*", "(", ")", "^"). Unit symbols can be of variable length ("m", "kg", "lb"). Algebra on these strings ("kg" -> 1000*"g") has problems, as kg is a fundamental unit. (A toy sketch of this route appears at the end of this answer.)
If you are doing it seriously then ANTLR (#Yaugen) is useful, but be aware that units in the wild will not follow a regular grammar due to the inconsistencies above.
If you are REALLY serious (i.e. prepared to put in a solid month), I'd be interested to know. :-)
My current approach (which is outside the scope of your question) is to collect a large number of examples from the literature automatically and create a number of heuristics.
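To make the "toy BNF" route above concrete, here is a minimal recursive-descent sketch in plain Java. It is my own illustration, handles only the punctuation listed above, produces a map from unit symbol to exponent, and makes no attempt at the ambiguities and abbreviations just discussed:

import java.util.LinkedHashMap;
import java.util.Map;

// Toy grammar:  expr := term (('*' | '/') term)*
//               term := '(' expr ')' | symbol ('^' digits)?
public class UnitParser {

    private final String src;
    private int pos;

    private UnitParser(String src) { this.src = src; }

    public static Map<String, Integer> parse(String text) {
        UnitParser p = new UnitParser(text.replace(" ", ""));
        Map<String, Integer> result = new LinkedHashMap<>();
        p.expr(result, 1);
        if (p.pos != p.src.length())
            throw new IllegalArgumentException("Unexpected character at position " + p.pos);
        return result;
    }

    private void expr(Map<String, Integer> out, int sign) {
        term(out, sign);
        while (pos < src.length() && (peek() == '*' || peek() == '/')) {
            char op = src.charAt(pos++);
            term(out, op == '/' ? -sign : sign);   // division flips the exponent sign
        }
    }

    private void term(Map<String, Integer> out, int sign) {
        if (pos < src.length() && peek() == '(') {
            pos++;                                  // consume '('
            expr(out, sign);
            expect(')');
        } else {
            String symbol = symbol();
            int exponent = 1;
            if (pos < src.length() && peek() == '^') {
                pos++;
                exponent = digits();
            }
            out.merge(symbol, sign * exponent, Integer::sum);
        }
    }

    // A unit symbol is one or more letters; '°' is allowed as a special case.
    private String symbol() {
        int start = pos;
        while (pos < src.length() && (Character.isLetter(peek()) || peek() == '°')) pos++;
        if (pos == start) throw new IllegalArgumentException("Expected a unit at position " + pos);
        return src.substring(start, pos);
    }

    private int digits() {
        int start = pos;
        while (pos < src.length() && Character.isDigit(peek())) pos++;
        return Integer.parseInt(src.substring(start, pos));
    }

    private char peek() { return src.charAt(pos); }

    private void expect(char c) {
        if (pos >= src.length() || src.charAt(pos++) != c)
            throw new IllegalArgumentException("Expected '" + c + "'");
    }

    public static void main(String[] args) {
        System.out.println(parse("m/s^2"));        // {m=1, s=-2}
        System.out.println(parse("°F/(lb*s^2)"));  // {°F=1, lb=-1, s=-2}
        System.out.println(parse("kg*m*s"));       // {kg=1, m=1, s=1}
    }
}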

Is this Java in "The Art of Multiprocessor Programming" or fancy pseudocode?

I've started reading "The Art of Multiprocessor Programming". Seems like a great book. It claims to have examples written in Java, and it really seems this way in the beginning, to the level that they can be copied and run as-is. However, quite quickly I start to see features which I had no idea were in Java. I guess they're not and the book simply uses fancy Java-like pseudocode, but it still doesn't hurt to verify.
I'm talking about things like:
Using the existential quantifier in a while condition, e.g.
while(\exists k != me) (level[k] >= i && victim[i] == me)
(replace \exists with the actual mathematical sign; recall that Haskell has similar things).
Using tuples and lexicographical ordering built-in to the syntax, e.g.
(label[k], k) << (label[i], i)
Which compares the left component and if needed, the right component.
As far as I know this is pseudocode and not Java, but I'm hardly familiar with this language.
It's not Java. I didn't check in detail, but, for example, 02~Chapter_02.zip/ch02/Mutex/src/mutex/Bakery.java from the book's website seems to be the program the first code fragment originates from, expressed in "real" Java.
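As a rough illustration (my own sketch, not the book's code or the code from that file), the two constructs would translate to plain Java along these lines, assuming fields such as int[] level, long[] label, and volatile int[] victim, and a thread index me:

// "∃ k != me : level[k] >= i && victim[i] == me"
boolean existsContender(int[] level, int[] victim, int me, int i) {
    for (int k = 0; k < level.length; k++) {
        if (k != me && level[k] >= i && victim[i] == me) {
            return true;
        }
    }
    return false;
}

// "(label[k], k) << (label[i], i)": compare the first components, break ties with the index
boolean lexicographicallyLess(long[] label, int k, int i) {
    return label[k] < label[i] || (label[k] == label[i] && k < i);
}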

BigDecimal notation eclipse plugin or nice external tool

I need to perform a lot of operations using BigDecimal, and I find that having to express
Double a = b - c * d; //natural way
as
BigDecimal a = b.subtract(c.multiply(d)); // BigDecimal way
is not only ugly, but a source of mistakes and communication problems between me and business analysts. They were perfectly able to read code with Doubles, but now they can't.
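For instance (a made-up formula with made-up names, just to show the scale of the problem), even a modestly bigger expression becomes hard to review:

import java.math.BigDecimal;
import java.math.MathContext;

BigDecimal price    = new BigDecimal("19.99");
BigDecimal discount = new BigDecimal("0.50");
BigDecimal quantity = new BigDecimal("3");
BigDecimal taxRate  = new BigDecimal("0.21");

// Natural way:  total = (price - discount * quantity) / (1 + taxRate)
BigDecimal total = price.subtract(discount.multiply(quantity))
                        .divide(BigDecimal.ONE.add(taxRate), MathContext.DECIMAL64);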
Of course, a perfect solution would be Java support for operator overloading, but since that is not going to happen, I'm looking for an Eclipse plugin or even an external tool that makes an automatic conversion from the "natural way" to the "BigDecimal way".
I'm not trying to preprocess source code, do dynamic translation, or any other complex thing; I just want something into which I can input text and get text back, and keep the "natural way" as a comment in the source code.
P.S.: I've found this incredibly smart hack, but I don't want to start doing bytecode manipulation. Maybe I could use it to create a Natural2BigDecimal translator, but I don't want to reinvent the wheel if someone has already built such a tool.
I don't want to switch to Scala/Groovy/JavaScript and I also can't, company rules forbid anything but java in server side code.
"I'm not trying to preprocess source code ... I just want something I can input [bigDecimal arithmetic expression] text".
Half of solving a problem is recognizing it for what it is. What you want is exactly something that preprocesses your BigDecimal expressions to produce legal Java.
You have only two basic choices:
A stand-alone "domain specific language" and DSL compiler that accepts "standard" expressions and converts them directly to Java code. (This is one kind of preprocessor). This leaves you with the problem of keeping all the expression fragments around, and somehow knowing where to put them in the Java code.
A tool that reads the Java source text, finds such expressions, and converts them to BigDecimal calls in the text. I'd suggest something that lets you write the expressions outside the actual code and inserts the translation.
Perhaps (stolen from another answer):
// BigDecimal a = b - c * d;
BigDecimal a = b.subtract( c.multiply( d ) );
with the meaning "compile the BigDecimal expression in the comment into its Java equivalent, and replace the following statement with that translation".
To implement the second idea, you need a program transformation system, which can apply source-to-source rewriting rules to transform the code (generation being a special case of transformation). This is just a preprocessor that is organized to be customizable to your needs.
Our DMS Software Reengineering Toolkit with its Java Front End could do this. You need a full Java parser to do that transformation part; you'll want name and type resolution so that you can parse/check the proposed expression for sanity.
While I agree that the as-is Java notation is ugly, and your proposal would make it prettier, my personal opinion is this isn't worth the effort. You end up with a dependency on a complex tool (yes, DMS is complex: manipulating code isn't easy) for a rather marginal gain.
If you and your team wrote thousands of these formulas, or the writers of such formulas were Java-naive, it might make sense. In that case, I'd go further and simply insist you write the standard expression format where you need it. You could customize the Java Front End to detect when the operand types were of decimal type and do the rewriting for you. Then you simply run this preprocessor before every Java compilation step.
I agree, it's very cumbersome! I use proper documentation (comments before each equation) as the best "solution" to this.
// a = b - c * d;
BigDecimal a = b.subtract( c.multiply( d ) );
You might go the route of an expression evaluator. There is a decent (albeit paid) one at http://www.singularsys.com/jep. ANTLR has a rudimentary grammar that also does expression evaluation (though I am not sure how it would perform) at http://www.antlr.org/wiki/display/ANTLR3/Expression+evaluator.
Neither would give you the compile-time safety you would have with true operators. You could also write the various algorithm-based classes in something like Scala, which does support operator overloading out of the box and would interoperate seamlessly with your other Java classes.
