I have classes for processing primitive array input: CharArrayExtractor for char[], ByteArrayExtractor for byte[], IntegerArrayExtractor for int[], ...
public class CharArrayExtractor {
    public List<Record> extract(char[] source) {
        List<Record> records = new ArrayList<Record>();
        int recordStartFlagPos = -1;
        int recordEndFlagPos = -1;
        for (int i = 0; i < source.length; i++) {
            if (source[i] == RECORD_START_FLAG) {
                recordStartFlagPos = i;
            } else if (source[i] == RECORD_END_FLAG) {
                recordEndFlagPos = i;
            }
            if (recordStartFlagPos != -1 && recordEndFlagPos != -1) {
                Record newRecord = makeRecord(source, recordStartFlagPos, recordEndFlagPos);
                records.add(newRecord);
                recordStartFlagPos = -1;
                recordEndFlagPos = -1;
            }
        }
        return records;
    }
}
public class ByteArrayExtractor {
    public List<Record> extract(byte[] source) {
        // filter and extract data from the array.
    }
}
public class IntegerArrayExtractor {
    public List<Record> extract(int[] source) {
        // filter and extract data from the array.
    }
}
The problem here is that the algorithm for extracting the data is the same; only the types of the input differ. Every time the algorithm changes, I have to change all of the extractor classes.
Is there a way to make the extractor classes more "generic"?
Best regards.
EDIT: It seems that every suggestion so far is to use autoboxing to achieve genericity. But the number of elements in the array is often large, so I want to avoid autoboxing.
I have added a more specific implementation of how the data is being extracted. I hope it clarifies things.
New Idea
A different approach is to wrap the primitive arrays and expose on the wrapper the methods your algorithm needs.
public class PrimitiveArrayWrapper {
    private byte[] byteArray = null;
    private int[] intArray = null;
    ...

    public PrimitiveArrayWrapper(byte[] byteArray) {
        this.byteArray = byteArray;
    }

    // other constructors

    public String extractFoo1(String pattern) {
        if (byteArray != null) {
            // do action on byteArray
        } else if (....)
        ...
    }
}
public class AlgorithmExtractor {
    // "do" is a reserved word in Java, so the method is named extract here
    public List<Record> extract(PrimitiveArrayWrapper wrapper) {
        String s = wrapper.extractFoo1("abcd");
        ...
    }
}
This mainly depends on how many methods you would have to cover, but at least you no longer have to edit the algorithm itself, only the way the primitive array is accessed. Furthermore, you would also be able to use a different object inside the wrapper.
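For illustration, using the wrapper might look roughly like this (loadBytes is just a hypothetical data source; extractFoo1 and extract are the sketched methods above):
byte[] raw = loadBytes(); // hypothetical source of data
PrimitiveArrayWrapper wrapper = new PrimitiveArrayWrapper(raw);
List<Record> records = new AlgorithmExtractor().extract(wrapper);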
Old Idea
Either use generics, or (what I also had in mind) provide overloaded methods which convert the primitive arrays into their wrapper types.
public class Extractor {
    public List<Record> extract(byte[] data) {
        InternalExtractor<Byte> ie = new InternalExtractor<Byte>();
        return ie.internalExtract(ArrayUtils.toObject(data));
    }

    public List<Record> extract(int[] data) {
        ...
    }
}
public class InternalExtractor<T> {
    // not private, so Extractor can call it
    List<Record> internalExtract(T[] data) {
        // do the extraction
    }
}
ArrayUtils is a helper class from commons lang from Apache.
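For reference, a minimal sketch of what ArrayUtils.toObject does (assuming commons-lang is on the classpath); note that it creates one wrapper object per element, which is exactly the boxing overhead the question's edit wants to avoid for large arrays:
import org.apache.commons.lang.ArrayUtils;

byte[] raw = {1, 2, 3};
Byte[] boxed = ArrayUtils.toObject(raw); // one Byte object per element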
I'm not sure how your filter will work as it will not know anything about the type the array contains.
Using reflection you can possibly do what you want, but you will lose compile-time type safety.
The java.lang.reflect.Array class provides functions for manipulating an array without knowing its type.
The Array.get() function will return the value at the requested index of the array and if it is a primitive wrap it in its corresponding Object type. The downside is you have to change your method signature to accept Object instead of specific array types which means the compiler can no longer check the input parameters for you.
Your code would become:
public class ArrayExtractor {
    public List<Record> extract(Object array) {
        // filter and extract data from the array.
        List<Record> records = new ArrayList<Record>();
        int length = Array.getLength(array);
        for (int index = 0; index < length; index++) {
            Object value = Array.get(array, index);
            // somehow filter using value here
        }
        return records;
    }
}
Personally I would prefer having type safety over using reflection even if it is a little more verbose.
interface Source {
    int length();
    int get(int index);
}

// one small adapter per primitive type
public List<Record> extract(final byte[] source) {
    return extract(new Source() {
        public int length() { return source.length; }
        public int get(int i) { return source[i]; }
    });
}

// common algorithm
public List<Record> extract(Source source) {
    for (int i = 0; i < source.length(); i++) {
        int data = source.get(i);
        ...
    }
}
Instead of passing each type, pass the class of the type, as below:
public List<Record> extract(Class<?> srcClass) {
    if (int[].class.equals(srcClass)) {
        // filter and extract data from the int[] array.
    } else if (...) {
        // do the same for the other types
    }
}
public class Extractor<T> {
    public List<Record> extract(T[] source) {
        // filter and extract data from the array.
    }
}
http://download.oracle.com/javase/tutorial/extra/generics/methods.html
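Note that T cannot be a primitive type, so the source must be a boxed array; a minimal usage sketch (boxing here via commons-lang's ArrayUtils is only one option, and carries the per-element cost mentioned in the question's edit):
Extractor<Integer> extractor = new Extractor<Integer>();
Integer[] boxed = ArrayUtils.toObject(new int[] {1, 2, 3}); // boxing step
List<Record> records = extractor.extract(boxed);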
You could do something like this.
public class ArrayExtractor<T> {
    public List<T> extract(T[] source) {
        // filter and extract data from the array.
    }
}
You would have a generic Extractor class and your implementation would be the same.
You can't use Java's generics here because of the primitive-typed source; your best bet is to use the Java reflection API to analyze the incoming source and invoke the appropriate extractor yourself.
I think it is possible to create a method like this:
public List<Record> extract(List<Number> source) {
    // filter and extract data from the array.
}
And use Arrays.asList(yourPrimitiveArray) to make it compatible.
After my tests and the comment of Sean Patrick Floyd, you can do this by writing, once, some helper methods for converting primitive arrays to lists:
public static void main(String[] args) {
    int[] i = {1, 2, 3};
    System.out.println(extract(asPrimitiveList(i)));
}

public static List<Object> extract(List<Number> source) {
    List<Object> l = new ArrayList<Object>();
    l.add(0);
    for (Number n : source) {
        // I know this line is rubbish :D
        l.set(0, ((Number) l.get(0)).doubleValue() + n.doubleValue());
    }
    return l;
}

private static List<Number> asPrimitiveList(int[] ia) {
    List<Number> l = new ArrayList<Number>(ia.length);
    for (int i = 0; i < ia.length; ++i) {
        l.add(ia[i]);
    }
    return l;
}

private static List<Number> asPrimitiveList(byte[] ia) {
    List<Number> l = new ArrayList<Number>(ia.length);
    for (int i = 0; i < ia.length; ++i) {
        l.add(ia[i]);
    }
    return l;
}

private static List<Number> asPrimitiveList(char[] ia) {
    List<Number> l = new ArrayList<Number>(ia.length);
    for (int i = 0; i < ia.length; ++i) {
        l.add((int) ia[i]); // Character is not a Number, so widen char to int first
    }
    return l;
}
No, it is never possible.
For example, take a look at the documentation of Arrays.copyOf(byte[] original, int newLength): there exist separate methods for the remaining primitive types. That is exactly the kind of behavior you want; if it were possible to do it generically, similar code would exist there instead.
Additionally, we could discuss more about how generics work, but that would be another issue, I guess.
It depends on what you're trying to achieve, but maybe you can work with the primitive wrappers instead? Then you could write a generic Extractor<? extends Number> (Number is the abstract class extended by all the numeric wrapper types).
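A rough sketch of that direction (NumberExtractor is an illustrative name, Record is the type from the question, and the primitive arrays would still have to be boxed before calling it):
public class NumberExtractor<T extends Number> {
    public List<Record> extract(T[] source) {
        List<Record> records = new ArrayList<Record>();
        for (T value : source) {
            // filter using value.intValue(), value.doubleValue(), ...
        }
        return records;
    }
}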
Yes, you should be able to use generics:
interface Extractor<T, R> {
public List<R> extract(T source);
}
class BaseExtractor<T> implements Extractor<T, Record>
{
public List<Record> extract(T source)
{
//do your thing
}
}
Here, you would have to assume that T is a primitive array, as you cannot use primitives in generic definitions.
Or else, you could use the wrapper Objects and do it this way:
interface Extractor<T, R> {
public List<R> extract(T[] source);
}
class BaseExtractor<T> implements Extractor<T, Record>
{
public List<Record> extract(T[] source)
{
//do your thing
}
}
In this case, your generic T can be Byte, Integer, etc.
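Usage of either variant might look roughly like this (in the first, T is bound to the whole primitive array type; in the second, to the boxed element type, so the input array must be boxed first):
// first variant: T is the primitive array type itself
Extractor<int[], Record> byArray = new BaseExtractor<int[]>();
List<Record> fromPrimitives = byArray.extract(new int[] {1, 2, 3});

// second variant: T is the wrapper element type
Extractor<Integer, Record> byElement = new BaseExtractor<Integer>();
List<Record> fromBoxed = byElement.extract(new Integer[] {1, 2, 3});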
Related
I have a programming assignment to make a generic stack in Java, and I need to make a deep copy of newNode T. I don't know how to write a deepCopy method that can access the object itself and output its deep copy. So far, I have this:
public class Stack<T>
{
private T[] data;
private int top;
private int size;
public Stack( )
{ top = -1;
size = 100;
data = (T[])new Object[100];
}
public Stack(int n)
{ top = -1;
size = n;
data = (T[])new Object[n];
}
public boolean push(T newNode)
{ if(top == size-1)
return false; // ** overflow error **
else
{ top = top +1;
data[top] = newNode.deepCopy();
return true; // push operation successful
}
}
public T pop( )
{ int topLocation;
if(top == -1)
return null; // ** underflow error **
else
{ topLocation = top;
top = top -1;
return data[topLocation];
}
}
public void showAll( )
{ for(int i = top; i >= 0; i--)
System.out.println(data[i].toString());
}
}
How can I make the deep copy of newNode? I'm pretty sure I need an interface for the method, but past that I'm lost.
Perhaps the most general and straightforward solution would be to ask the calling code to provide the deep-copying routine at construction:
public class Stack<T> {
    ...
    private final Function<T, T> elementCopier;

    public Stack(Function<T, T> elementCopier) {
        // make sure they are not passing you a null copier:
        this.elementCopier = Objects.requireNonNull(elementCopier);
        ...
    }
    ...
    public boolean push(T element) {
        ...
        data[top] = elementCopier.apply(element);
        ...
    }
    ...
}
So, for example, for a cloneable class type where .clone() is in fact a deep copy, the user code would look like:
Stack<MyElemClz> stack = new Stack<>(x -> x.clone());
// or:
Stack<MyElemClz> stack = new Stack<>(MyElemClz::clone);
...
MyElemClz elem = ...;
...
stack.push(elem);
If the type is an immutable object, like String, there is no need for cloning; in that case the user would pass the identity lambda x -> x as the copier:
Stack<String> stack = new Stack<>(x -> x)
If the user insists on making a copy even when the class is immutable, you can force one:
Stack<String> stack = new Stack<>(x -> new String(x))
// or
Stack<String> stack = new Stack<>(String::new)
One can use an ObjectOutputStream/ObjectInputStream to make a deep copy: the stack would then not store the object itself (a reference to mutable state) but its serialized bytes.
Serializing through an ObjectOutputStream effectively performs a deep copy of the whole object graph.
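A minimal sketch of such a serialization-based copy, assuming the element type implements Serializable (this helper is not part of the Stack class from the question):
import java.io.*;

@SuppressWarnings("unchecked")
static <T extends Serializable> T deepCopy(T original) throws IOException, ClassNotFoundException {
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    ObjectOutputStream out = new ObjectOutputStream(bytes);
    out.writeObject(original);  // serializes the whole object graph
    out.close();
    ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()));
    return (T) in.readObject(); // reads back an independent copy
}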
If you want to go with an interface, or you don't like Valentin's approach, you could do this:
interface Copiable<T> {
T deepCopy();
}
public class Stack<T extends Copiable<T>> {
...
}
and then implement the deepCopy method in the objects that you put in your stack, e.g.
class A implements Copiable<A> {
@Override
public A deepCopy() {
// ... your copy code here
}
}
Stack<A> stack = new Stack<>();
etc.
I am able to map an array to a complex type as described in one of the answers in this link. However, my application contains more than 500 classes, and it would be very time consuming to identify and map these one by one. I am trying to build a generic method which would do this conversion. For example, complex type to array can be achieved with the following methods; I am looking for a way to do the reverse operation:
public <T> T map(Object srcObj, Class<?> destClass, String mapId) {
if (srcObj == null) {
return null;
}
if (srcObj.getClass().isArray()) {
return (T) mapArrayToArray((Object[]) srcObj, destClass);
}
return (T) dozerBeanMapper.map(srcObj, destClass, mapId);
}
private Object mapArrayToArray(Object[] srcArray, Class<?> destClass) {
Class<?> componentType = destClass.getComponentType();
Object resultArray = Array.newInstance(componentType, srcArray.length);
for (int i = 0; i < srcArray.length; i++) {
Object resultItem = this.map(srcArray[i], componentType);
Array.set(resultArray, i, resultItem);
}
return resultArray;
}
My problem is this: I have an iterator class which is supposed to iterate through the elements in a given data structure, <E> let's say, but what I have managed to accomplish is that when I pass in the data structure it iterates over the data structure itself.
i.e. DynamicIterator it = new DynamicIterator(da);
Say da is an array: the output will be [1,2,3,4,5,6] instead of 1, 2, 3, 4, 5, 6.
My issue is, more than anything, understanding the generally accepted practice for dealing with this, more than the issue itself.
edit for code:
public class X<E>
{
private final E[] rray;
private int currentIndex = 0;
public X(E... a)
{
//if the incoming array is null, don't start
if(a == null)
{
System.out.println("Array is null");
System.exit(1);
}
//set the temp array (rray) to the incoming array (a)
this.rray = a;
}
//hasNext element?
public boolean hasNext()
{
return rray.length > currentIndex;
}
//next element (depends on hasNext())
public E next()
{
if (!hasNext())
{
System.out.println("Element doesn't exist, done");
System.exit(1);
}
return rray[currentIndex++];
}
//return array
public E[] access()
{
return rray;
}
}
You won't be able to do this with a completely generic parameter <E> - how would you iterate through a Throwable, for example? What your class X does at the moment is accept any number of objects in its constructor and then simply return each of those objects in turn.
If you restrict the bounds of the objects passed in so that they implement e.g. Iterable, then you can actually start to "look inside" them and return their contents:
public class X<E> {
private final Iterator<E> it;
public X(Iterable<E> a) {
it = a.iterator();
}
public boolean hasNext() {
return it.hasNext();
}
public E next() {
return it.next();
}
}
Although this doesn't really accomplish anything different to just using a.iterator() directly instead of an instance of X...
I recently came across a very stupid (at least from my point of view) implementation inside Android's Parcel class.
Suppose I have a simple class like this
class Foo implements Parcelable{
private String[] bars;
//other members
public int describeContents() {
return 0;
}
public void writeToParcel(Parcel dest, int flags){
dest.writeStringArray(bars);
//parcel others
}
private Foo(Parcel source){
source.readStringArray(bars);
//unparcel other members
}
public static final Parcelable.Creator<Foo> CREATOR = new Parcelable.Creator<Foo>(){
public Foo createFromParcel(Parcel source){
return new Foo(source);
}
public Foo[] newArray(int size){
return new Foo[size];
}
};
}
Now, if I want to parcel a Foo object and bars is null, I see no way to recover from this situation (except by catching exceptions, of course). Here is the implementation of these two methods from Parcel:
public final void writeStringArray(String[] val) {
if (val != null) {
int N = val.length;
writeInt(N);
for (int i=0; i<N; i++) {
writeString(val[i]);
}
} else {
writeInt(-1);
}
}
public final void readStringArray(String[] val) {
int N = readInt();
if (N == val.length) {
for (int i=0; i<N; i++) {
val[i] = readString();
}
} else {
throw new RuntimeException("bad array lengths");
}
}
So writeStringArray is fine if I pass in bars while it is null; it just writes -1 to the Parcel. But how is readStringArray supposed to be used? If I pass bars in (which of course is null) I will get a NullPointerException from val.length. If I create bars beforehand, say bars = new String[???], I have no clue how big it should be. If the size doesn't match what was written, I receive a RuntimeException.
Why is readStringArray not aware of the -1 that writeStringArray writes for null arrays, so it could just return?
The only way I see is to save the size of bars myself before I call writeStringArray(String[]), which makes this method kind of useless. It would also redundantly save the size of the array twice (once for me to remember, a second time from writeStringArray).
Does anyone know how these two methods are supposed to be used, as there is no Javadoc for them?
You should use Parcel.createStringArray() in your case.
I can't imagine a proper use-case for Parcel.readStringArray(String[] val) but in order to use it you have to know the exact size of array and manually allocate it.
It's not really clear from the (lack of) documentation, but readStringArray() is meant to be used when the caller already knows how to create the string array before calling the function; for example, when it is statically instantiated or its size is known from another previously read value.
What you need here is to call the function createStringArray() instead.
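Applied to the Foo example from the question, the unparcelling constructor would then look roughly like this (createStringArray allocates the array itself and returns null when -1 was written):
private Foo(Parcel source) {
    bars = source.createStringArray(); // handles the null/-1 case for you
    // unparcel other members
}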
In this code I am trying to sort out intV and stringV using the getSmallestValue method. I have tried different ideas but nothing seems to be working. Does anyone have any bright ideas on how to implement this getSmallestValue method?
public class test {
public static Comparable getSmallestValue(Vector<Comparable> a) {
Comparator com = Collections.reverseOrder();
Collections.sort(a, com);
return (Comparable) a;
}
public static void main(String[] args) {
Vector<Comparable> intV = new Vector<Comparable>();
intV.add(new Integer(-1));
intV.add(new Integer(56));
intV.add(new Integer(-100));
int smallestInt = (Integer) getSmallestValue(intV);
System.out.println(smallestInt);
Vector<Comparable> stringV = new Vector<Comparable>();
stringV.add("testing");
stringV.add("Pti");
stringV.add("semesterGoes");
String smallestString = (String) getSmallestValue(stringV);
System.out.println(smallestString);
}
}
Welcome to StackOverflow.
Your basic problem is that you have tried to turn a Vector into an Integer which you cannot do.
What is likely to be more useful is to use the first element of the vector.
I would suggest you:
- use a List instead of a Vector;
- avoid manual boxing (e.g. new Integer(-1));
- define getSmallestValue using generics to avoid confusion.
Here are two ways you could implement this method.
public static <N extends Comparable<N>> N getSmallestValue(List<N> a) {
Collections.sort(a);
return a.get(0);
}
public static <N extends Comparable<N>> N getSmallestValue2(List<N> a) {
return Collections.min(a);
}
List<Integer> ints = Arrays.asList(-1, 56, -100);
int min = getSmallestValue(ints);
// or
int min = Collections.min(ints);
Use Collections.min(). You can check out the source if you want to know how it's implemented.
Vector<Integer> v = new Vector<Integer>();
v.add(22);
v.add(33);
System.out.println(Collections.min(v));