Java: Vararg method called with explicit subclass array [duplicate]

Consider the following example, ignoring the reason one would want to do this:
private static class Original {
    public String getValue() {
        return "Foo";
    }
}

private static class Wrapper extends Original {
    private Original orig;

    public Wrapper(Original orig) {
        this.orig = orig;
    }

    @Override
    public String getValue() {
        return orig.getValue();
    }
}
public static void test(Original... o) {
    if (o != null && o.length > 0) {
        for (int i = 0; i < o.length; i++) {
            if (o[i] instanceof Wrapper) {
                o[i] = ((Wrapper) o[i]).orig; // Throws java.lang.ArrayStoreException at runtime
            }
        }
    }
}
public static void main(String[] args) {
    test(new Wrapper[] { // Explicitly create an array of subclass type
        new Wrapper(new Original())
    });
}
This example gives no warnings or errors at compile time. It seems the compiler decides that a Wrapper[] contains Wrapper instances, which are necessarily instances of the Original class. This is perfectly fine.
However, at runtime, the Wrapper[] instance is passed directly into the method. I would have thought it would be smart enough to tear down this array and re-create an Original[] at runtime, but it seems this is not the case.
Is this behavior documented somewhere (like the JLS)? An ordinary programmer like me would assume that the vararg parameter Original... can be manipulated as if it were an Original[].

Yes, when a Wrapper is an Original, then a Wrapper[] is also an Original[] (it surprised me too when I first realized it).
Your Wrapper is a subtype of Original since it extends the Original class.
And yes, the subtype relationship between the array types may give rise to an ArrayStoreException if the called method tries to store an Original that is not a Wrapper into the passed array. But this is not checked at compile time. It is my understanding that this is exactly why we have the ArrayStoreException type, since other attempts to store the wrong type into an array are usually caught at compile time. There is a nice brief example in the documentation of ArrayStoreException. That example also demonstrates that it hasn't really got anything to do with varargs or method calls; it applies to all arrays.
The Java language was designed this way from version 1 (which is long before varargs were introduced, BTW). Thanks to Andy Turner for finding the Java Language Specification (JLS) reference: It is in section 4.10.3 Subtyping among Array Types:
If S and T are both reference types, then S[] >_1 T[] iff S >_1 T.
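The runtime store check is easy to reproduce without any varargs or method calls at all. A minimal sketch (the class name StoreDemo is my own):

```java
public class StoreDemo {
    public static void main(String[] args) {
        Object[] arr = new String[1]; // legal: String[] is a subtype of Object[]
        try {
            arr[0] = Integer.valueOf(42); // compiles fine, but fails the runtime element-type check
        } catch (ArrayStoreException e) {
            System.out.println("caught ArrayStoreException");
        }
    }
}
```

The assignment compiles because the static type of arr is Object[]; the JVM then rejects it at runtime because the array's actual component type is String.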

Mechanism of type erasure in Java [duplicate]

I've read the Oracle docs on Generics and some reference books, and I still cannot grasp some things about Java type erasure. First of all, why aren't we allowed to say:
public class Gen<T> {
    T obj = new T();

    public T getObj() {
        return obj;
    }

    public void setObj(T obj) {
        this.obj = obj;
    }
}
Why doesn't Java allow me to say new T()? I understand that memory for an object of type T is allocated at runtime and type erasure happens at compile time, but when type erasure is done, all of my T's will be replaced with Objects, so why is this a big deal?
Also, how is this kind of manipulation with T[] possible:
T[] arr = (T[]) new Object[size];
I just can't wrap my head around these things.
Thanks in advance.
I expected it to create Object obj = new Object() and to give me type safety throughout the code, like inserting an element, or extracting it with some getter. I don't understand why this is not allowed even with type erasure.
All of my T's will be replaced with Objects, so why is this a big deal?
Because T can be something other than Object.
class Gen<T> {
    public T obj;

    public Gen() { obj = new T(); /* illegal */ }
    public Gen(T t) { obj = t; /* legal */ }
    // getters and setters are unnecessary complications for this example
}
Gen<Integer> g = new Gen<Integer>();
Integer i = g.obj; // should be safe, but you would make it unsafe
i = i + 5; // uh oh
Gen<Integer> h = new Gen<Integer>(0);
Integer j = h.obj;
j = j + 5;
Type erasure is meant to remove generics while keeping the program the same, in the sense that if you ran the program without doing erasure you would get the same results. When this program is interpreted without erasure, i is an Integer. If we followed your method of type erasure, it would instead get assigned with an Object. So your way of doing it is wrong. Further, since new T() needs to know what T is to work, but erasure removes all runtime knowledge of T, there is in fact no way to compile new T(); while doing erasure, so it's banned. In contrast, the non-erased and erased versions of the h and j sequence do the same operations, so those are allowed.
The thing with the array is a hack and doesn't actually create a T[].
<T> T[] hack(int n) { return (T[])new Object[n]; }
Integer[] is = hack(5); // runtime error
Unchecked casts like (T) or (T[]) are where Java compromises on the "same-behavior" property of erased programs. A non-erased program would fail in hack because the cast would fail. The erased program can't actually perform the cast, so hack succeeds, and the failure is in the variable assignment. As long as an incorrectly cast object is not passed anywhere where the actual type is known, nothing goes wrong. It becomes your responsibility to maintain type safety. The above function, for example, fails to do that. The following example class does it correctly.
class SmallLIFO<T> {
    private T[] buf = (T[]) new Object[10]; // take responsibility for maintaining type safety
    private int used = 0; // the Object[]-pretending-to-be-a-T[] is never given to the user, who may know what a T is and expose the lie

    public boolean push(T t) { // this class's public interface only operates on objects that are the right type
        boolean ret = used < 10;
        if (ret) buf[used++] = t;
        return ret;
    }

    public T pop() {
        return used > 0 ? buf[--used] : null; // we'd either need a cast to (T[]) in buf or a cast to (T) here; no avoiding it
    }
}
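As a quick sanity check, here is that SmallLIFO sketch again (reproduced as a nested class so the snippet compiles on its own) together with a small usage that never exposes the internal array:

```java
public class LifoDemo {
    // Reproduced from the answer above so this snippet is self-contained
    static class SmallLIFO<T> {
        @SuppressWarnings("unchecked")
        private T[] buf = (T[]) new Object[10]; // an Object[] in disguise; never leaks out
        private int used = 0;

        public boolean push(T t) {
            boolean ret = used < 10;
            if (ret) buf[used++] = t;
            return ret;
        }

        public T pop() {
            return used > 0 ? buf[--used] : null;
        }
    }

    public static void main(String[] args) {
        SmallLIFO<String> s = new SmallLIFO<>();
        s.push("a");
        s.push("b");
        System.out.println(s.pop()); // b
        System.out.println(s.pop()); // a
        System.out.println(s.pop()); // null (empty)
    }
}
```

Because the public interface only accepts and returns T, the lie inside buf is never observed and no ClassCastException can occur.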
You seem to be saying that every new T() could simply be replaced with new Object(), which is a perfectly valid constructor call. Indeed that is true, but is that the intention of "new T()"?
The purpose of new T() is of course not to create a new Object instance, but to create a new instance of T, whatever that may be. And it is exactly because the JVM doesn't know what T is, that it is impossible to create an instance of T.
Suppose that Java works the way you said it would, and changed all new T() to new Object(), and you have:
public class Foo {
    private int x = 10;

    public Foo() { System.out.println("Hello"); }

    public static <T> T magicallyCreateT() {
        return new T();
    }

    public int getX() { return x; }
}
What would a reasonable person expect if I did this?
Foo foo = Foo.magicallyCreateT();
System.out.println(foo.getX());
From a type-checking perspective, that snippet looks completely normal, doesn't it?
They would expect Hello to be printed, and foo.getX() to return 10, wouldn't they? But the truth is, since the Object constructor is called, not Foo's, no Hello is printed, and since magicallyCreateT returns an instance of Object, you wouldn't even be able to call getX on foo! There's no getX method in the Object class! I'd imagine the program would throw a ClassCastException at runtime.
So you see there are lots of problems if you just "create an Object", when you say "I want to create a T", so it is not allowed to do things like new T().
For the case of (T[]) new Object[], it is different. You are explicitly saying that you are creating an Object[] and casting it to T[]. In the same way, you can also do (T) new Object(). In both cases, you'd get a ClassCastException if something goes wrong later down the line, as in the scenario above. And in the same way that you can't do new T(), you can't do new T[] either!
Whenever you're casting with a type parameter like this, you're basically telling the compiler that "trust me, I know what I'm doing".
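The deferred failure described above can be seen directly. In this sketch (the names CastDemo and pretend are mine), the unchecked (T) cast succeeds inside the method because erasure makes it a no-op; the ClassCastException only appears at the call site, where the compiler inserts the real checkcast:

```java
public class CastDemo {
    @SuppressWarnings("unchecked")
    static <T> T pretend() {
        return (T) new Object(); // after erasure this cast is a no-op; nothing is checked here
    }

    public static void main(String[] args) {
        Object ok = CastDemo.<Object>pretend(); // fine: T really is Object
        System.out.println("ok: " + (ok != null));
        try {
            // the compiler inserts a checkcast to String at this assignment
            String s = CastDemo.<String>pretend();
            System.out.println(s);
        } catch (ClassCastException e) {
            System.out.println("ClassCastException at the call site");
        }
    }
}
```

The exception's stack trace points at main, not at pretend, which is exactly what makes this kind of bug hard to track down.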

Java dynamic, static casting

import javax.swing.*;

public class Toast {
    static Object[] objects = { new JButton(),
        new String("Example"), new Object() };

    public static void main(String[] args) {
        System.out.println( new Count(objects) );
        for (Object o : objects)
            System.out.println(o);
    }
}
class Count {
    int b, s, o;

    public Count(Object[] objects) {
        for (int i = 0; i < objects.length; i++)
            count(objects[i]);
    }

    public void count(JButton x) { b++; }
    public void count(String x) { s++; }
    public void count(Object x) { o++; }

    public String toString() {
        return b + " : " + s + " : " + o + "\n";
    }
}
Above is a piece of code that appears in some form or other in past exam papers for one of my upcoming tests. The idea of the question is to gauge whether you fully understand polymorphism and dynamic and static casting. Basic ideas of OO.
I would like to put out what I think is correct and if people would be able to correct me or add points that would be greatly appreciated.
From what I can see in the above code:
Items are upcast to Object in the object array, as every class in Java technically inherits from the Object class. This is why, when count is run, it will say there are 3 Objects, not 1 JButton, 1 String and 1 Object.
When the enhanced for loop is run, the toString of each object's actual type is invoked, e.g. Example for the String and a memory address for the Object (I'm not sure what the JButton will print). As this is resolved at runtime, this is known as dynamic casting.
I cannot see any other points that would be relevant to the above bit of code.
The idea behind static and dynamic casting is related to the moment a type decision needs to be made. If it is made by the compiler, it's a static cast. If the compiler postpones the decision to runtime, it's a dynamic cast.
So, your first observation is incorrect. The upcast does not explain the count. Objects do not lose their type, but the compiler must make a static decision about which overload to invoke, and it chooses count(Object). Overload resolution in Java is always performed at compile time, based on the static types of the arguments.
Your second observation is also incorrect. What is in use there is polymorphism. In Java, instance methods are always invoked on the runtime type of the instance, not the type written in the code. Also, there is no dynamic casting here; the compiler can verify all the types. Method invocation is virtual, but that's not a cast.
Actually, in this example I don't see a single case of dynamic casting. The compiler can verify all types. You normally only see dynamic casting when downcasting, and there is no case of that here.
Here's what I would take away:
The compiler implicitly upcasts when performing assignments. This includes assigning to array elements during initialization.
The compiler and JVM do not implicitly downcast when selecting method overloads. The static type of the objects array is Object[], so the count(Object) method will always be called.
The JVM does implicitly "downcast" (in a sense) when invoking a virtual method. The println loop will always invoke the toString method of the actual object instance rather than always invoking Object.toString.
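These two takeaways can be shown in a few lines. A minimal sketch (class and method names are my own): the overload binds to the static type at compile time, while toString dispatches on the runtime type:

```java
public class DispatchDemo {
    static String which(Object o) { return "Object overload"; }
    static String which(String s) { return "String overload"; }

    public static void main(String[] args) {
        Object o = "Example"; // static type Object, runtime type String
        System.out.println(which(o));     // overload chosen from the static type: "Object overload"
        System.out.println(o.toString()); // virtual call dispatches on the runtime type: "Example"
    }
}
```

Changing the declaration to String o = "Example" would flip the first line to the String overload, without touching the runtime object at all.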
In your Count constructor, count(Object) will always be called, as all the elements are upcast to Object.
To prevent that, you can test with instanceof and then downcast the object before calling count:
public Count(Object[] objects) {
    for (int i = 0; i < objects.length; i++) {
        if (objects[i] instanceof JButton)
            count((JButton) objects[i]);
        else if (objects[i] instanceof String)
            count((String) objects[i]);
        else
            count(objects[i]);
    }
}

Can we distinguish the results of diamond operator from raw constructor?

I have some code that I would write
GenericClass<Foo> foos = new GenericClass<>();
While a colleague would write it
GenericClass<Foo> foos = new GenericClass();
arguing that in this case the diamond operator adds nothing.
I'm aware that constructors that actually use arguments related to the generic type can cause a compile time error with <> instead of a run time error in the raw case. And that the compile time error is much better. (As outlined in this question)
I'm also quite aware that the compiler (and IDE) can generate warnings for the assignment of raw types to generics.
The question is instead about the case where there are no arguments, or no arguments related to the generic type. In that case, is there any way the constructed object GenericClass<Foo> foos can differ depending on which constructor was used, or does Java's type erasure guarantee they are identical?
For instantiations of two ArrayLists, one with the diamond operator and one without...
List<Integer> fooList = new ArrayList<>();
List<Integer> barList = new ArrayList();
...the bytecode generated is identical.
LOCALVARIABLE fooList Ljava/util/List; L1 L4 1
// signature Ljava/util/List<Ljava/lang/Integer;>;
// declaration: java.util.List<java.lang.Integer>
LOCALVARIABLE barList Ljava/util/List; L2 L4 2
// signature Ljava/util/List<Ljava/lang/Integer;>;
// declaration: java.util.List<java.lang.Integer>
So there wouldn't be any difference between the two as far as the bytecode is concerned.
However, the compiler will generate an unchecked warning if you use the second approach. Hence, there's really no value in the second approach; all you're doing is generating a false positive unchecked warning with the compiler that adds to the noise of the project.
I've managed to demonstrate a scenario in which doing this is actively harmful. The formal name for this is heap pollution. This is not something that you want to occur in your code base, and any time you see this sort of invocation, it should be removed.
Consider this class which extends some functionality of ArrayList.
class Echo<T extends Number> extends ArrayList<T> {
    public Echo() {
    }

    public Echo(Class<T> clazz) {
        try {
            this.add(clazz.newInstance());
        } catch (InstantiationException | IllegalAccessException e) {
            System.out.println("YOU WON'T SEE ME THROWN");
            System.exit(-127);
        }
    }
}
Seems innocuous enough; you can add an instance of whatever your type bound is.
However, if we're playing around with raw types...there can be some unfortunate side effects to doing so.
final Echo<? super Number> oops = new Echo(ArrayList.class);
oops.add(2);
oops.add(3);
System.out.println(oops);
This prints [[], 2, 3] instead of throwing any kind of exception. If we wanted to do an operation on all Integers in this list, we'd run into a ClassCastException, thanks to that delightful ArrayList.class invocation.
Of course, all of that could be avoided if the diamond operator were added, which would guarantee that we wouldn't have such a scenario on our hands.
Now, because we've introduced a raw type into the mix, Java can't perform type checking per JLS 4.12.2:
For example, the code:
List l = new ArrayList<Number>();
List<String> ls = l; // Unchecked warning
gives rise to a compile-time unchecked warning, because it is not
possible to ascertain, either at compile time (within the limits of
the compile-time type checking rules) or at run time, whether the
variable l does indeed refer to a List<String>.
The situation above is very similar; if we take a look at the first example we used, all we're doing is not adding an extra variable into the matter. The heap pollution occurs all the same.
List rawFooList = new ArrayList();
List<Integer> fooList = rawFooList;
So, while the byte code is identical (likely due to erasure), the fact remains that different or aberrant behavior can arise from a declaration like this.
Don't use raw types, mmkay?
The JLS is actually pretty clear on this point. http://docs.oracle.com/javase/specs/jls/se8/html/jls-8.html#jls-8.1.2
First it says "A generic class declaration defines a set of parameterized types (§4.5), one for each possible parameterization of the type parameter section by type arguments. All of these parameterized types share the same class at run time."
Then it gives us the code block
Vector<String> x = new Vector<String>();
Vector<Integer> y = new Vector<Integer>();
boolean b = x.getClass() == y.getClass();
and says that it "will result in the variable b holding the value true."
The reference-equality test (==) shows that x and y share exactly the same Class object.
Now do it with the diamond operator and without.
Vector<Integer> z = new Vector<>();
Vector<Integer> w = new Vector();
boolean c = z.getClass() == w.getClass();
boolean d = y.getClass() == z.getClass();
Again, c is true, and so is d.
So if, as I understand, you're asking whether there is some difference at runtime or in the bytecode between using the diamond and not, the answer is simple. There is no difference.
Whether it's better to use the diamond operator in this case is a matter of style and opinion.
P.S. Don't shoot the messenger. I would always use the diamond operator in this case. But that's just because I like what the compiler does for me in general w/r/t generics, and I don't want to fall into any bad habits.
P.P.S. Don't forget that this may be a temporary phenomenon. http://docs.oracle.com/javase/specs/jls/se8/html/jls-4.html#jls-4.8 warns us that "The use of raw types in code written after the introduction of generics into the Java programming language is strongly discouraged. It is possible that future versions of the Java programming language will disallow the use of raw types."
You may have a problem with the default constructor if your generic arguments are bounded. For example, here's a sloppy and incomplete implementation of a list of numbers that tracks the total sum:
public class NumberList<T extends Number> extends AbstractList<T> {
    List<T> list = new ArrayList<>();
    double sum = 0;

    @Override
    public void add(int index, T element) {
        list.add(index, element);
        sum += element.doubleValue();
    }

    @Override
    public T remove(int index) {
        T removed = list.remove(index);
        sum -= removed.doubleValue();
        return removed;
    }

    @Override
    public T get(int index) {
        return list.get(index);
    }

    @Override
    public int size() {
        return list.size();
    }

    public double getSum() {
        return sum;
    }
}
Omitting the generic arguments with the default constructor may lead to a ClassCastException at runtime:
List<String> list = new NumberList(); // compiles with warning and runs normally
list.add("test"); // throws CCE
However adding the diamond operator will produce a compile-time error:
List<String> list = new NumberList<>(); // error: incompatible types
list.add("test");
In your specific example: yes, they are identical.
Generally: beware, they may not be!
The reason is that a different overloaded constructor or method may be invoked when a raw type is used; it is not only that you get better type safety and avoid a runtime ClassCastException.
Overloaded constructors
public class Main {
    public static void main(String[] args) {
        Integer anInteger = Integer.valueOf(1);
        GenericClass<Integer> foosRaw = new GenericClass(anInteger);
        GenericClass<Integer> foosDiamond = new GenericClass<>(anInteger);
    }

    private static class GenericClass<T> {
        public GenericClass(Number number) {
            System.out.println("Number");
        }

        public GenericClass(T t) {
            System.out.println("Parameter");
        }
    }
}
The version with the diamond invokes a different constructor; the output of the above program is:
Number
Parameter
Overloaded methods
public class Main {
    public static void main(String[] args) {
        method(new GenericClass());
        method(new GenericClass<>());
    }

    private static void method(GenericClass<Integer> genericClass) {
        System.out.println("First method");
    }

    private static void method(Object object) {
        System.out.println("Second method");
    }

    private static class GenericClass<T> { }
}
The version with the diamond invokes a different method; the output:
First method
Second method
This is not a complete answer - but does provide a few more details.
While you can not distinguish calls like
GenericClass<T> x1 = new GenericClass<>();
GenericClass<T> x2 = new GenericClass<T>();
GenericClass<T> x3 = new GenericClass();
There are tools that will allow you to distinguish between
GenericClass<T> x4 = new GenericClass<T>() { };
GenericClass<T> x5 = new GenericClass() { };
Note: While it looks like we're missing new GenericClass<>() { }, the diamond on an anonymous class was not valid Java when this was written (Java 9 later made it legal where the inferred type is denotable).
The key is that type information about the generic parameters is stored for anonymous classes. In particular, we can get at the generic parameters via
Type superclass = x.getClass().getGenericSuperclass();
Type tType = (superclass instanceof ParameterizedType) ?
        ((ParameterizedType) superclass).getActualTypeArguments()[0] :
        null;
For x1, x2, and x3, tType will be an instance of TypeVariableImpl (the same instance in all three cases, which is not surprising, as getClass() returns the same object for all three).
For x4, tType will be T.class
For x5, getGenericSuperclass() does not return an instance of ParameterizedType, but instead a Class (in fact, GenericClass.class).
We could then use this to determine whether our object was constructed via (x1, x2 or x3), x4, or x5.
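A runnable sketch of that reflection probe, using ArrayList in place of GenericClass so it compiles on its own: only the anonymous-subclass form records String as the actual type argument.

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.ArrayList;

public class TokenDemo {
    public static void main(String[] args) {
        // Anonymous subclass: ArrayList<String> is recorded as the generic superclass
        ArrayList<String> anon = new ArrayList<String>() { };
        Type anonArg = ((ParameterizedType) anon.getClass().getGenericSuperclass())
                .getActualTypeArguments()[0];
        System.out.println(anonArg); // class java.lang.String

        // Plain instantiation: getClass() is just ArrayList.class, whose generic
        // superclass AbstractList<E> only mentions the type variable E
        ArrayList<String> plain = new ArrayList<String>();
        Type plainArg = ((ParameterizedType) plain.getClass().getGenericSuperclass())
                .getActualTypeArguments()[0];
        System.out.println(plainArg); // E, a TypeVariable, not String
    }
}
```

This is the "super type token" trick used by libraries such as Gson's TypeToken to smuggle generic type information past erasure.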

Issue with Java varargs and generics in abstract classes

I'm playing with some functional-like programming, and having issues with some pretty deeply nested generics. Here's my SSCCE that fails, with an abstract class involved:
public abstract class FooGen<IN, OUT> {
    OUT fn2(IN in1, IN in2) { // clever? try at a lazy way, just call the varargs version
        return fnN(in1, in2);
    }

    abstract OUT fnN(IN... ins); // subclasses implement this

    public static void main(String[] args) {
        FooGen<Number, Number> foogen = new FooGen<Number, Number>() {
            @Override Number fnN(Number... numbers) {
                return numbers[0];
            }
        };
        System.out.println(foogen.fn2(1.2, 3.4));
    }
}
This dies with a
java.lang.ClassCastException: [Ljava.lang.Object; cannot be cast to [Ljava.lang.Number;
However, for a non-abstract FooGen, it works fine:
public class FooGen<IN, OUT> {
    OUT fn2(IN g1, IN g2) {
        return fnN(g1, g2);
    }

    OUT fnN(IN... gs) {
        return (OUT) gs[0];
    }

    public static void main(String[] args) {
        FooGen<Number, Number> foogen = new FooGen<Number, Number>();
        System.out.println(foogen.fn2(1.2, 3.4));
    }
}
This prints 1.2. Ideas? It seems like somewhere Java has lost track of the generics. This is pushing the limits of my generics knowledge. :-)
(Added in response to answers)
First, thanks for the upvotes, and to Paul and Daemon for their helpful answers.
Still wondering why it works as Numbers in the 2nd version, I had an insight. As a Thought Experiment, let's add a .doubleValue() somewhere. You can't. In the code itself the variables are INs, not Numbers. And in the main() it's merely declaring the type, FooGen<Number,Number> but there's no place there to add code.
In Version #2, it really isn't "working" as Numbers. Internally, with erasure, everything is Objects, as explained by Paul and Daemon, and, looking back sheepishly, well understood by myself. Basically, in this complex example, I got overexcited and misled by the <Number> declaration.
Don't think I'll bother with a workaround. The whole idea was to be lazy. :-) For efficiency I created parallel interfaces and code that take primitive doubles (and ints), and there this trick works just fine.
Varargs parameters are first and foremost arrays. So without the syntactic sugar, your code would look like the following:
OUT fn2(IN in1, IN in2) {
    return fnN(new IN[] {in1, in2});
}

abstract OUT fnN(IN[] ins);
Except new IN[] would not be legal because arrays of type parameters cannot be instantiated, due to type erasure. An array needs to know its component type, but IN has been erased to its upper bound, Object, at runtime.
The varargs invocation unfortunately hides this issue, and at runtime you have the equivalent of fnN(new Object[] {in1, in2}), whereas fnN has been overridden to take a Number[].
However, for a non-abstract FooGen, it works fine
This is because by instantiating FooGen directly, you haven't overridden fnN. Thus it accepts an Object[] at runtime and no ClassCastException occurs.
For example, this will fail even if FooGen isn't abstract:
FooGen<Number, Number> foogen = new FooGen<Number, Number>() {
    @Override
    Number fnN(Number... gs) {
        return super.fnN(gs);
    }
};
System.out.println(foogen.fn2(1.2, 3.4));
So you can see that it really isn't related to the abstractness of FooGen, but to whether fnN gets overridden with a narrowed argument type.
SOLUTION
There are no easy workarounds. One idea is to have fnN take a List<? extends IN> instead:
OUT fn2(IN in1, IN in2) {
    // safe because the array won't be exposed outside the list
    @SuppressWarnings("unchecked")
    final List<IN> ins = Arrays.asList(in1, in2);
    return fnN(ins);
}

abstract OUT fnN(List<? extends IN> ins);
If you wanted to keep the varargs support, you could treat this method as an implementation detail and delegate to it:
abstract OUT fnNImpl(List<? extends IN> ins);

public final OUT fnN(IN... ins) {
    return fnNImpl(Arrays.asList(ins));
}
This ClassCastException occurs due to a feature of Java called "type erasure". Type erasure happens when generics are compiled: since generic type information is not available at runtime, the compiler erases each type parameter to its upper bound (here, Object).
In your code, when FooGen is compiled, fnN(IN... ins) receives a parameter of type Object[]. The ClassCastException occurs when the runtime then attempts to down-cast that Object[] to the narrower array type declared by the overriding method.
This isn't even mentioning the fact that creation of such "generic arrays" is prohibited in Java regardless.
Here is a quote from Angelika Langer's Java Generics FAQ:
Here is another example that illustrates the potential danger of
ignoring the warning issued regarding array construction in
conjunction with variable argument lists.
Example (of a varargs method and its invocation):
public final class Test {
    static <T> T[] method_1(T t1, T t2) {
        return method_2(t1, t2); // unchecked warning
    }

    static <T> T[] method_2(T... args) {
        return args;
    }

    public static void main(String... args) {
        String[] strings = method_1("bad", "karma"); // ClassCastException
    }
}
warning: [unchecked] unchecked generic array creation of type T[] for
varargs parameter
return method_2(t1, t2);
^
In this example the first method calls a second method and the second
method takes a variable argument list. In order to invoke the varargs
method the compiler creates an array and passes it to the method. In
this example the array to be created is an array of type T[] , that
is, an array whose component type is a type parameter. Creation of
such arrays is prohibited in Java and you would receive an error
message if you tried to create such an array yourself.

Generic array creation in java [duplicate]

I am trying to create a class of a generic type. This is my class file.
public class TestClass<T> implements AbstractDataType<T> {
    T[] contents;

    public TestClass(int length) {
        this.contents = (T[]) new Object[length];
    }
}
But contents then only has the methods inherited from the Object class. How can I create an array of the abstract type for contents?
As far as initializing contents goes, I think what you have is the best you can do. If there were a way, ArrayList would probably do it (line 132: http://www.docjar.com/html/api/java/util/ArrayList.java.html).
But when you say "contents only has the methods inherited from the Object class", I'm assuming you mean that you can only access methods like toString and equals when you are working with a T instance in your code, and I'm guessing this is the primary problem. That's because you're not telling the compiler anything about what a T instance is. If you want to access methods from a particular interface or type, you need to put a type constraint on T.
Here's an example:
interface Foo {
    int getSomething();
    void setSomethingElse(String somethingElse);
}

public class TestClass<T extends Foo> implements AbstractDataType<T> {
    T[] contents;

    public TestClass(int length) {
        this.contents = (T[]) new Object[length];
    }

    public void doSomethingInteresting(int index, String str) {
        T obj = contents[index];
        System.out.println(obj.getSomething());
        obj.setSomethingElse(str);
    }
}
So now you can access methods other than those inherited from Object.
You cannot create a generic array in Java.
As stated in the Java Language Specification: "The rules above imply that the element type in an array creation expression cannot be a parameterized type, other than an unbounded wildcard."
I believe that in any method that accesses contents, you need to cast the elements to type T. The main reasoning for this is that, as an Object array, Java treats the contents as Objects. So while contents might be declared as an array of T, at runtime it is still just an array of Object.
How do you think ArrayList.toArray and Arrays.copyOf do it?
See Array.newInstance.
public TestClass(Class<T> type, int length) {
    this.contents = (T[]) Array.newInstance(type, length);
}
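A sketch of how that constructor behaves (the Holder class is my own stand-in for TestClass, whose AbstractDataType interface isn't shown): the array's real component type is now T, though a cast is still needed because Array.newInstance is declared to return Object.

```java
import java.lang.reflect.Array;

public class TypedArrayDemo {
    // Hypothetical stand-in for TestClass, built with a Class token
    static class Holder<T> {
        final T[] contents;

        @SuppressWarnings("unchecked")
        Holder(Class<T> type, int length) {
            // Array.newInstance is declared to return Object, so the cast remains,
            // but the array it builds really has component type T
            contents = (T[]) Array.newInstance(type, length);
        }
    }

    public static void main(String[] args) {
        Holder<String> h = new Holder<>(String.class, 3);
        System.out.println(h.contents.getClass().getComponentType()); // class java.lang.String
        System.out.println(h.contents.length); // 3
    }
}
```

Unlike the (T[]) new Object[length] trick, this array can safely be exposed to callers (e.g. returned from a toArray-style method), because it genuinely is a T[] at runtime.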
