Can I override any methods of an array?
For example, toString() or other methods.
import java.lang.reflect.Method;

public class ArraysClassTest {
    static int[] array = { 1, 2, 3, 1 };

    public static void main(String[] args) {
        Class<? extends int[]> class1 = array.getClass();
        try {
            Method method = class1.getMethod("toString");
        } catch (NoSuchMethodException | SecurityException e) {
            e.printStackTrace();
        }
    }
}
You can't change any features of arrays. JLS §10.7 Array Members specifies every member of an array:
The members of an array type are all of the following:
The public final field length, which contains the number of components of the array. length may be positive or zero.
The public method clone, which overrides the method of the same name in class Object and throws no checked exceptions. The return type of the clone method of an array type T[] is T[].
A clone of a multidimensional array is shallow, which is to say that it creates only a single new array. Subarrays are shared.
All the members inherited from class Object; the only method of Object that is not inherited is its clone method.
The specification doesn't allow any way of customizing this implementation. An array's toString() method, for example, is always the basic one inherited from Object.
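This is easy to confirm: calling toString() on an int[] yields the default Object rendering, and the usual workaround is the static utility java.util.Arrays.toString rather than any override. A minimal sketch:

```java
import java.util.Arrays;

public class ArrayToStringDemo {
    public static void main(String[] args) {
        int[] array = { 1, 2, 3, 1 };
        // Inherited Object.toString(): a type descriptor plus a hash,
        // e.g. "[I@1b6d3586" — "[I" meaning "array of int".
        System.out.println(array.toString());
        // The customary workaround is a static utility, not an override:
        System.out.println(Arrays.toString(array)); // [1, 2, 3, 1]
    }
}
```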
To create an array object the compiler emits one of three instructions into the compiled Java bytecode: newarray for primitives, anewarray for reference types, or multianewarray for multidimensional arrays. In implementing those instructions, the virtual machine creates each array class as needed at runtime (JVMS §5.3.3 Creating Array Classes). The VM also defines dedicated bytecode instructions for getting and setting array elements and for reading an array's length.
How the arrays are implemented within the VM is not specified whatsoever. It is purely an implementation detail, and even the Java compiler doesn't know, or care. The actual code involved depends on the flavor of virtual machine you're running your program on, the version of that VM, the OS and CPU it's running on, and any relevant runtime options the VM is configured with (e.g., whether in interpreted mode or not).
A quick look over the OpenJDK 8 source code turns up some of the relevant machinery for arrays:
src/share/vm/oops/arrayKlass.cpp
src/share/vm/oops/objArrayKlass.cpp
src/share/vm/oops/typeArrayKlass.cpp
src/share/vm/interpreter/bytecodeInterpreter.cpp – implements bytecode instructions for the interpreter, including instructions for creating and accessing arrays. It's tortuous and intricate, however.
src/share/vm/c1/c1_RangeCheckElimination.cpp – performs some clever array bounds check eliminations when compiling from bytecode to native code.
As arrays are a core feature of the language and the VM, it's impossible to point to any one source file and say "here, this is the class Array code". Arrays are special, and the machinery that implements them is literally all over the place.
If you want to customize the behavior of an array, the only thing you can do is not use the array directly, but use, subclass, or write, a collection class that internally contains the array. That gives you complete freedom to define the class's behavior and performance characteristics. However, it is impossible to make a custom class be an array in the Java language sense. That means you can't make it implement the [] operator or be passable to a method that expects an array.
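As a sketch of that wrapper approach (the class IntBag and its methods are my own invention, not a standard API), a small class holding an int[] can define whatever toString() it likes:

```java
import java.util.Arrays;

// Hypothetical wrapper: it owns the array, so it controls all behavior.
public class IntBag {
    private final int[] values;

    public IntBag(int... values) {
        this.values = values.clone(); // defensive copy
    }

    public int get(int index) {
        return values[index];
    }

    public int length() {
        return values.length;
    }

    @Override
    public String toString() {
        return "IntBag" + Arrays.toString(values);
    }

    public static void main(String[] args) {
        System.out.println(new IntBag(1, 2, 3, 1)); // IntBag[1, 2, 3, 1]
    }
}
```

The price is exactly the limitation described above: an IntBag cannot be indexed with [] or passed where an int[] is expected.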
In Java, all arrays (including those of primitive types) have java.lang.Object as their base class. (For one thing this is how zero length arrays can be modelled).
Although it's possible to override any method in that base class, Java itself specifies the form of the array. You are not able to interfere with that: in particular you can't extend an array.
To answer your direct question: no, you can't.
Arrays are a "compiler" construct: the compiler knows what String[] means, and it creates the corresponding bytecode out of that. You can only create array objects, not "new array classes". And beyond that, the JVM knows what to do with the "array using" bytecode instructions.
In other words: the source code that defines the behavior of Array-of-something objects is completely out of your control. Arrays just do what arrays do; no way for you to interfere with that.
And to get to your implicit question why things are this way:
Sometimes there isn't much to understand; you simply have to accept it. The Java language was created more than 20 years ago, and at some point some folks made design choices. Many of them were excellent; some might be handled differently if things were redone today.
You will find for example, that Scala has a different way of dealing with arrays. But for java, things are as they are; and especially for things that are "so core" to the language as arrays, there is simply no sense in changing any of that nowadays.
You can create a proxy and use it in place of the original object. (The proxy implements the array's interfaces, Cloneable and Serializable, but note it is not itself an array.)

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

final int[] array = { 1, 2, 3, 1 };
Object proxy = Proxy.newProxyInstance(
        array.getClass().getClassLoader(),
        array.getClass().getInterfaces(),
        new InvocationHandler() {
            @Override
            public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                // This handler answers every invocation with the same string.
                StringBuilder b = new StringBuilder("the array is");
                for (int i : array)
                    b.append(" ").append(i);
                return b.toString();
            }
        });
System.out.println(proxy.toString());

The output of the above is "the array is 1 2 3 1".
Related
I’m storing references to BiConsumers<Integer, X> adapted to Consumer<Integer>:
public void setConsumer(BiConsumer<Integer, X> consumer) {
    fieldConsumer = integer -> consumer.accept(integer, fieldSubject);
}
But I need 2 of them, so I changed the code to use an array:
private Consumer<Integer>[] fieldConsumers;

public MyClass(int numberOfObservers) {
    Consumer<Integer> consumer = integer -> {};
    fieldConsumers = (Consumer<Integer>[]) Array.newInstance(consumer.getClass(), numberOfObservers);
}

public void addConsumer(int consumerIndex, BiConsumer<Integer, X> consumer) {
    // Offending line
    fieldConsumers[consumerIndex] = responseType -> consumer.accept(responseType, fieldSubject);
}
So that the callback can be triggered with a:
for (Consumer<Integer> consumer : fieldConsumers) {
    consumer.accept(responseType);
}
I got the error:
java.lang.ArrayStoreException:
on this line:
fieldConsumers[consumerIndex] = responseType-> consumer.accept(responseType, fieldSubject);
Now, if you are still reading this, I have one more question:
Am I still holding reference to outside Consumers if I do it this way, as opposed to using the old fieldConsumers.add(consumer) where fieldConsumers is a List<BiConsumer<Integer, X>> ?
You used Array.newInstance(consumer.getClass(), numberOfObservers) to create the Consumer<Integer>[] array. But consumer.getClass() returns the actual class of the object you’re invoking the method on, which is always an implementation class of the interface. An array of this element type can only hold objects of the same concrete class, not arbitrary implementations of the interface.
This is no different from, e.g.
CharSequence cs = "hello";
CharSequence[] array = (CharSequence[]) Array.newInstance(cs.getClass(), 1);
array[0] = new StringBuilder();
Here, cs has the type CharSequence and the reflective array creation appears to create an array of type CharSequence[], so storing a StringBuilder should be possible. But since cs.getClass() returns the actual implementation class String, the array is actually of type String[], hence, the attempt to store a StringBuilder produces an ArrayStoreException.
In case of lambda expressions, things get slightly more complicated, as the actual implementation classes of the functional interface are provided at runtime and intentionally unspecified. You used the lambda expression integer -> {} for the array creation in the constructor, which evaluated to a different implementation class than the responseType-> consumer.accept(responseType, fieldSubject) within the addConsumer method, in this particular runtime.
This behavior is in line with this answer describing the behavior of the most commonly used environment. Still, other implementations could exhibit different behavior, e.g. evaluate to the same implementation class for a particular functional interface for all lambda expressions. But it’s also possible that multiple evaluations of the same lambda expression produce different classes.
So the fix is to use the intended interface element type, e.g.
fieldConsumers = (Consumer<Integer>[]) Array.newInstance(Consumer.class, numberOfObservers);
But there is no need for a reflective array creation at all. You can use:
fieldConsumers = new Consumer[numberOfObservers];
You can not write new Consumer<Integer>[numberOfObservers], as generic array creation is not allowed. That’s why the code above uses a raw type. Using Reflection instead wouldn’t improve the situation, as it is an unchecked operation in either case. You might have to add @SuppressWarnings("unchecked") for it. The cleaner alternative is to use a List<Consumer<Integer>>, as it shields you from the oddities of arrays and generics.
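A sketch of the difference (only Consumer from java.util.function is assumed): with Consumer.class as the element type, lambdas from different expressions can be stored freely; with one lambda's concrete class as the element type, storing a different lambda typically fails at run time in the common environment.

```java
import java.lang.reflect.Array;
import java.util.function.Consumer;

public class ConsumerArrayDemo {
    public static void main(String[] args) {
        // Element type is the interface: any implementation may be stored.
        @SuppressWarnings("unchecked")
        Consumer<Integer>[] ok =
                (Consumer<Integer>[]) Array.newInstance(Consumer.class, 2);
        ok[0] = i -> {};
        ok[1] = i -> System.out.println(i); // a different lambda, still fine

        // Element type is one lambda's concrete class: a second lambda is
        // (typically) a different class, so the store fails at run time.
        Consumer<Integer> first = i -> {};
        Object[] narrow = (Object[]) Array.newInstance(first.getClass(), 1);
        try {
            narrow[0] = (Consumer<Integer>) i -> {};
        } catch (ArrayStoreException e) {
            System.out.println("ArrayStoreException, as in the question");
        }
    }
}
```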
It’s not clear what you mean with “reference to outside Consumers” here. In either case, you have references to Consumer implementations capturing references to BiConsumer implementations you received as arguments to addConsumer.
I learned that Java's type system follows a broken subtyping rule in that it treats arrays as covariant. I've read online that if a method's argument will be read from and modified, the only type-safe option is invariance which makes sense and we can provide some simple examples of that in Java.
Is Java's patch of this rule by dynamically type checking the type of an object being stored noticeable in terms of performance? I can't imagine that it would be more than one or two additional instructions to check the type of the object. A follow up question is, ignoring any performance differences at runtime, is this equivalent to having a non-broken subtyping rule for arrays? Forgive me if my questions are elementary!
I found an article that seems to answer your question:
"Java provides three different ways to find the type of object at runtime: instanceof keyword, getClass() and isInstance() method of java.lang.Class. Out of all three only getClass() is the one which exactly find Type of object while others also return true if Type of object is the super type."
From this it seems that you should be able to write myObject.getClass() and that will return the object's class. Or you could use MyClass.class.isInstance(theObject), which returns true if theObject is an instance of MyClass (or a subclass). Thirdly you should be able to:

if (myObject instanceof MyClass) {
    //your code
}
Also here is a link to another post which may help clarify:
Java isInstance vs instanceOf operator
Also here are two more links to other similar questions:
How to determine an object's class (in Java)?
java - How do I check if my object is of type of a given class?
Performance is always a tricky subject. Depending on the context, such checks may be optimized out completely. For example:
public static void main(String[] args){
    Object[] array = new String[2];
    array[0] = "Hello, World!"; // compiler knows this is safe
    System.out.println(array[0]);
    array[1] = new Object(); // compiler knows this will throw
}
Here, the compiler has access to the actual type of the array during both assignments, so the run-time checks are not strictly necessary (if the compiler is clever enough, it can optimize them out).
In this example, however, a run-time check is necessary:
public static void main(String[] args){
    Object[] array = Math.random() < .5 ? new String[2] : new Object[2];
    array[0] = "Hello, World!"; // compiler knows this is safe
    System.out.println(array[0]);
    array[1] = new Object(); // compiler must check array type
}
Things get even more complex when you consider the mind-bending just-in-time optimizations that can take place! Overall, though, yes: as with many of Java's safety features, there is a necessary performance cost. Whether or not it's noticeable will depend on your use-case.
As for the equivalence question: No, this is not the same as having invariant arrays. Invariant arrays would make Object[] array = new String[2]; a compile-time error.
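The contrast is easy to see against generics, which are invariant. The covariant array assignment compiles and fails only at run time, while the analogous List assignment (shown as a comment) is rejected at compile time:

```java
import java.util.ArrayList;
import java.util.List;

public class VarianceDemo {
    public static void main(String[] args) {
        Object[] array = new String[2]; // covariant: compiles fine
        try {
            array[1] = new Object();    // fails only at run time
        } catch (ArrayStoreException e) {
            System.out.println("caught ArrayStoreException");
        }

        // Invariant generics move the error to compile time:
        // List<Object> list = new ArrayList<String>(); // does not compile
        List<String> list = new ArrayList<>();
        list.add("Hello, World!");
        System.out.println(list.get(0)); // Hello, World!
    }
}
```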
Consider the following:
String[] array = {"1", "2", "3", "4"};
myFunction(array);

public void myFunction(String[] array){
    //some task here
}
I had to answer this question today.
How are arrays passed to a function? Meaning, what is the underlying technique?
When I failed to answer, I was told the following.
The address of first element is passed and other consecutive elements are obtained from the first element's address by adding some x bytes.
Does this happen in every programming language or in just c and c++?
Thank you!
No, in Java for example arrays are objects. They are passed like any other object is passed to a method: the method would take a reference to the array object as a whole, and not a reference to the first element.
Taken from this page:
All class and array types inherit (§8.4.8) the methods of class Object
Java has no concept of "pointers", in the same sense as C or C++ (addresses in memory), i.e. an object reference does not really point to the memory location where the object is stored.
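A short sketch of the consequence: the method gets a copy of the reference, so writes through it mutate the caller's array, while reassigning the parameter has no effect outside the method.

```java
public class ArrayPassingDemo {
    static void fill(int[] a) {
        a[0] = 99;            // visible to the caller: same array object
        a = new int[] { -1 }; // invisible: only the local reference changes
    }

    public static void main(String[] args) {
        int[] nums = { 1, 2, 3 };
        fill(nums);
        System.out.println(nums[0]); // 99
    }
}
```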
In theory, every language is different. However:
In C, a function cannot take an array as an argument. When you declare an array parameter, the type is automatically converted into a pointer, so there is no difference between void f( int a[5] ) and void f( int* ). (This is often summarized by saying that arrays are not first class objects.)

For reasons of C compatibility, C++ follows the same rules, but in C++, you wouldn't normally pass an array as a parameter anyway, and if you did, you would pass it by reference, where this conversion to pointer doesn't occur. (I.e. void f( int (&a)[5] ) is not the same as void f( int* &a ).)

In Java, and a number of other recent languages, everything (or almost everything), including arrays, is an object, and parameters, variables, etc. are pointers to those objects. So in Java, you pass a pointer to the full array object, with all of the information about its size, etc. Sort of like passing an std::vector<int>* in C++.

In a lot of languages (mostly older ones), like Pascal and languages of the Modula family, an array is an object type just like any other. If you don't take any particular actions, an array will be passed by value, with a complete copy of the array.

And in the earliest languages, like Fortran or Algol, each language often had its own very particular way of passing arrays, although in general they followed the same rules as other types. (Some early languages, like Cobol or Basic, didn't even support passing arguments to functions, at least in their earliest variants.)

Amongst the languages you're likely to see today, I think that the Java model predominates. C remains an outlier, and C++ gives you the choice: you can pass an std::vector by value or by reference (but reference is recommended for performance reasons).
Why is String.length() a method, and int[].length a property (see below)?
int[] nums = {2,4,7,12,43};
String phrase = "Hello, world.";
System.out.println(nums.length);
System.out.println(phrase.length());
I don't think there has to be a good reason, and I think there could be many reasons.
But one is that by making String#length() a method, it can be declared in an interface (in this case CharSequence). Interfaces cannot declare public instance fields.
This is what the String::length() function looked like in older JDK versions (newer versions compute the length from the backing array instead of a count field):

public int length() {
    return count;
}
So essentially count could've been called length and made public to be similar to arrays (it is final after all).
It was probably just a design decision. There may have been some contributing factors that we can speculate about (one of which could've been the CharSequence thing mentioned by Mark Peters).
Because String is not an array as such. The designers of Java designed arrays (which are objects) to have a public field named length.
On the other hand, a String has a method which gives the length instead. In general it is a more conventional approach to make member fields private and use methods to access them, but in the case of arrays it is not.
They're different objects with different signatures as far as you are concerned. A String is not a char[] (although internally it might be implemented that way).
No particular reason, I think. In fact in C#, a very similar language, String.Length is a property (http://msdn.microsoft.com/en-us/library/system.string.length.aspx). But take a look at what the C# designers have to say about this design:
The Length property returns the number of Char objects in this instance, not the number of Unicode characters.
The reason is that a Unicode character might be represented by more than one Char. Use the
System.Globalization.StringInfo class to work with each Unicode character instead of each Char.
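Java's String.length() has exactly the same caveat: it counts char values (UTF-16 code units), not Unicode characters, so a code point outside the Basic Multilingual Plane counts twice. A small sketch:

```java
public class LengthDemo {
    public static void main(String[] args) {
        // "a" followed by U+1F600 (an emoji, encoded as a surrogate pair)
        String s = "a\uD83D\uDE00";
        System.out.println(s.length());                      // 3 chars
        System.out.println(s.codePointCount(0, s.length())); // 2 code points
    }
}
```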
Why is int[].length a property?
Arrays are special objects in java, they have a simple attribute named length which is final.
There is no "class definition" of an array (you can't find it in any .class file), they're a part of the language itself.
The public final field length, which contains the number of components of the array. length may be positive or zero.
The public method clone, which overrides the method of the same name in class Object and throws no checked exceptions. The return type of the clone method of an array type T[] is T[].
A clone of a multidimensional array is shallow, which is to say that it creates only a single new array. Subarrays are shared.
All the members inherited from class Object; the only method of Object that is not inherited is its clone method.
Resource: JLS §10.7
Why is String.length() a method?
Why is it that, if you have, let's say, these functions:
void func1(Object o){
    //some code
}

void func1(Object[] o){
    //some code
}
You can call, for example:
func1("ABC");
but not:
func1({"ABC", "DEF"}); // instead having to write:
func1(new Object[]{"ABC", "DEF"});
Question: Is there any special reason why the constructor needs to be called on arrays ?
The "array initialiser" is only available for declarations / assignments:
Object[] o = { 1, 2 };
Or for "array creation expressions":
new Object[] { 1, 2 };
Not for method calls:
// Doesn't work:
func1({1, 2});
It's the way it is... You can read about it in the JLS, chapter 10.6. Array Initializers. An extract:
An array initializer may be specified in a declaration (§8.3, §9.3, §14.4), or as part of an array creation expression (§15.10), to create an array and provide some initial values.
Apart from it not being defined in the JLS right now, there seems to be no reason why a future Java version wouldn't allow array initialisers / array literals to be used in other contexts. The array type could be inferred from the context in which an array literal is used, or from the contained variable initialisers.
Of course, you could declare func1 to have a varargs argument. But then you should be careful about overloading it, as this can cause some confusion at the call-site.
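A sketch of that varargs alternative (this func1 is my own illustrative version, not the one from the question): the compiler builds the array for you, so the call site looks like a bare list of values.

```java
public class VarargsDemo {
    static int func1(Object... o) { // callers may pass zero or more arguments
        return o.length;
    }

    public static void main(String[] args) {
        System.out.println(func1("ABC", "DEF"));                  // 2
        System.out.println(func1(new Object[] { "ABC", "DEF" })); // also 2
        System.out.println(func1());                              // 0
    }
}
```

Note that an explicit array is also accepted and passed through as-is, which is one source of the overloading confusion at the call-site.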
There was a suggestion that Java SE 5.0 was going to have an array literal notation. Unfortunately, we got varargs instead, with all the fun that goes with that.
So to answer the question of why, the language is just like that. You may see list literals in a later version of Java.
You are trying to perform inline array initialization which Java doesn't really support yet.
I suppose you could achieve the desired result using varargs if you so wished, but if you need to pass in an array to a method, you have to initialise it the way Java likes an array to be initialised.
When you call func1("ABC"), a String object with the value "ABC" is created automatically by Java from the string literal. For creating most other objects, arrays included, you need to use the new operator.