I have an object of type X which I want to convert into a byte array before sending it to be stored in S3. Can anybody tell me how to do this? I appreciate your help.
What you want to do is called "serialization". There are several ways of doing it, but if you don't need anything fancy I think using the standard Java object serialization would do just fine.
Perhaps you could use something like this?
package com.example;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
public class Serializer {
    public static byte[] serialize(Object obj) throws IOException {
        try (ByteArrayOutputStream b = new ByteArrayOutputStream()) {
            try (ObjectOutputStream o = new ObjectOutputStream(b)) {
                o.writeObject(obj);
            }
            return b.toByteArray();
        }
    }

    public static Object deserialize(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ByteArrayInputStream b = new ByteArrayInputStream(bytes)) {
            try (ObjectInputStream o = new ObjectInputStream(b)) {
                return o.readObject();
            }
        }
    }
}
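For example, round-tripping a value through the helper might look like this (MyData is a hypothetical class that implements java.io.Serializable):

MyData original = new MyData();
byte[] bytes = Serializer.serialize(original);
// the cast is needed because deserialize() returns Object
MyData copy = (MyData) Serializer.deserialize(bytes);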
There are several improvements that can be made to this, not least the fact that you can only read/write one object per byte array, which may or may not be what you want.
Note that "Only objects that support the java.io.Serializable interface can be written to streams" (see java.io.ObjectOutputStream).
One thing you might run into: the continual allocation and resizing of the java.io.ByteArrayOutputStream can turn out to be quite a bottleneck. Depending on your threading model you might want to consider reusing some of the objects.
For serialization of objects that do not implement the Serializable interface, you either need to write your own serializer (for example using the read*/write* methods of java.io.DataOutputStream and the get*/put* methods of java.nio.ByteBuffer, perhaps together with reflection) or pull in a third-party dependency.
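As a minimal sketch of the hand-rolled approach, assuming a hypothetical Point class with two public int fields x and y (both the class and its wire format are made up for illustration):

public static byte[] serialize(Point p) throws IOException {
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    try (DataOutputStream out = new DataOutputStream(bytes)) {
        // the fixed field order defines the wire format
        out.writeInt(p.x);
        out.writeInt(p.y);
    }
    return bytes.toByteArray();
}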
This site has a list and performance comparison of some serialization frameworks. Looking at the APIs it seems Kryo might fit what you need.
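I haven't verified this against a particular Kryo version, but basic usage is roughly along these lines (MyData stands in for your own type; treat this as a sketch, as API details vary between versions):

Kryo kryo = new Kryo();
kryo.register(MyData.class);

// write to a growable buffer and grab the bytes
Output output = new Output(1024, -1);
kryo.writeObject(output, myData);
byte[] bytes = output.toBytes();

// read it back
Input input = new Input(bytes);
MyData copy = kryo.readObject(input, MyData.class);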
Use the serialize and deserialize methods in SerializationUtils from commons-lang.
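Assuming the object implements Serializable, that boils down to a one-liner in each direction (MyData is a placeholder for your own type):

// org.apache.commons.lang3.SerializationUtils
byte[] bytes = SerializationUtils.serialize(myObject);
MyData copy = SerializationUtils.deserialize(bytes);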
Yeah, just use binary serialization. Each class has to implement Serializable, but it's straightforward from there.
Your other option, if you want to avoid implementing the Serializable interface, is to use reflection and read and write data to/from a buffer using a process like the one below:
/**
 * Sets all int fields in an object to 0.
 *
 * @param obj The object to operate on.
 *
 * @throws RuntimeException If there is a reflection problem.
 */
public static void initPublicIntFields(final Object obj) {
    try {
        Field[] fields = obj.getClass().getFields();
        for (int idx = 0; idx < fields.length; idx++) {
            if (fields[idx].getType() == int.class) {
                fields[idx].setInt(obj, 0);
            }
        }
    } catch (final IllegalAccessException ex) {
        throw new RuntimeException(ex);
    }
}
Source.
As I've mentioned in other, similar questions, you may want to consider compressing the data, as the default Java serialization is a bit verbose. You do this by putting a GZIPInputStream/GZIPOutputStream between the object streams and the byte streams.
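For example, a compressed variant of the serialize() method above might look like this (a sketch; note that closing the ObjectOutputStream also finishes the GZIP stream):

public static byte[] serializeCompressed(Object obj) throws IOException {
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    // the GZIP stream sits between the object stream and the byte stream
    try (ObjectOutputStream out = new ObjectOutputStream(new GZIPOutputStream(bytes))) {
        out.writeObject(obj);
    }
    return bytes.toByteArray();
}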
To convert an object to a byte array, use the concepts of serialization and de-serialization.
The complete conversion from object to byte array is explained in this tutorial:
http://javapapers.com/core-java/java-serialization/
Q. How can we convert an object into a byte array?
Q. How can we serialize an object?
Q. How can we de-serialize an object?
Q. What is the need for serialization and de-serialization?
The Problem
I am attempting to pass a collection of JNA structures to a native method but it's proving very fiddly:
Let's say we have a structure:
class MyStructure extends Structure {
// fields...
}
and a method in a JNA interface:
void pass(MyStructure[] data);
which maps to the native method:
void pass(const MYStructure* data);
Now the complication comes from the fact that the application is building a collection of these structures dynamically, i.e. we are NOT dealing with a static array but something like this:
class Builder {
private final Collection<MyStructure> list = new ArrayList<>();
// Add some data
public void add(MyStructure entry) {
list.add(entry);
}
// Pass the data to the native library
public void pass() {
// TODO
}
}
A naive implementation of the pass() method could be:
MyStructure[] array = list.toArray(MyStructure[]::new);
api.pass(array);
(where api is the JNA library interface).
Of course this doesn't work because the array is not a contiguous block of memory - fair enough.
Rubbish Solution #1
One solution is to allocate a JNA array from a structure instance and populate it field-by-field:
MyStructure[] array = (MyStructure[]) new MyStructure().toArray(size);
for(int n = 0; n < array.length; ++n) {
    array[n].field = list.get(n).field;
    // other fields...
}
This guarantees the array consists of contiguous memory. But we have had to implement a field-by-field copy of the data (which we've already populated in the list) - this is OK for a simple structure, but some of the data I am dealing with has dozens of fields, structures that point to further nested arrays, etc. Basically this approach is just not viable.
Rubbish Solution #2
Another alternative is to convert the collection of data to a simple JNA pointer, something along these lines:
MyStructure[] array = list.toArray(MyStructure[]::new);
int size = array[0].size();
Memory mem = new Memory(array.length * size);
for(int n = 0; n < array.length; ++n) {
    if(array[n] != null) {
        array[n].write();
        byte[] bytes = array[n].getPointer().getByteArray(0, size);
        mem.write(n * size, bytes, 0, bytes.length);
    }
}
This solution is generic, so we can apply it to other structures as well. But we have to change the method signatures to be Pointer instead of MyStructure[], which makes the code more obtuse, less self-documenting and harder to test. Also we could be using a third-party library where this might not even be an option.
(Note I asked a similar question a while ago here but didn't get a satisfactory answer, thought I'd try again and I'll delete the old one / answer both).
Summary
Basically I was expecting/hoping to have something like this:
MyStructure[] array = MyStructure.magicContiguousMemoryBlock(list.toArray());
similar to how the JNA helper class provides StringArray for an array-of-string:
StringArray array = new StringArray(new String[]{...});
But no such 'magic' exists as far as I can tell. Is there another, simpler and more 'JNA' way of doing it? It seems really dumb (and probably incorrect) to have to allocate a byte-by-byte copy of the data that we essentially already have!
Do I have any other options? Any pointers (pun intended) gratefully accepted.
As the author of the previous answer, I realize a lot of the confusion came from approaching it one way before realizing a better solution, which we discussed primarily in comments on your answer. I will try to answer this additional clarification with an actual demonstration of my suggestion from that answer, which I think is the best approach. Simply put, if you have a non-contiguous structure and need a contiguous structure, you must either bring the contiguous memory to the structure, or copy the structure to the contiguous memory. I'll outline both approaches below.
Is there another, simpler and more 'JNA' way of doing it? It seems really dumb (and probably incorrect) to have to allocate a byte-by-byte copy of the data that we essentially already have!
I did mention in my answer on the other question that you could use useMemory() in this situation. It is a protected method but if you are already extending a Structure you have access to that method from the subclass (your structure), in much the same way (and for precisely the same purpose) as you would extend the Pointer constructor of a subclass.
You could therefore take an existing structure in your collection and change its native backing memory to be the contiguous memory. Here is a working example:
public class Test {
    @FieldOrder({ "a", "b" })
    public static class Foo extends Structure {
        public int a;
        public int b;

        // You can either override or create a separate helper method
        @Override
        public void useMemory(Pointer m) {
            super.useMemory(m);
        }
    }

    public static void main(String[] args) {
        List<Foo> list = new ArrayList<>();
        for (int i = 1; i < 6; i += 2) {
            Foo x = new Foo();
            x.a = i;
            x.b = i + 1;
            list.add(x);
        }

        Foo[] array = (Foo[]) list.get(0).toArray(list.size());

        // Index 0 copied on toArray()
        System.out.println(array[0].toString());
        // but we still need to change backing memory for it to the copy
        list.get(0).useMemory(array[0].getPointer());

        // iterate to change backing and write the rest
        for (int i = 1; i < array.length; i++) {
            list.get(i).useMemory(array[i].getPointer());
            list.get(i).write();
            // Since sending the structure array as an argument will auto-write,
            // it's necessary to sync it here.
            array[i].read();
        }

        // At this point you could send the contiguous structure array to native.
        // Both list.get(n) and array[n] point to the same memory, for example:
        System.out.println(list.get(1).toString());
        System.out.println(array[1].toString());
    }
}
Output (note the contiguous allocation). The last two outputs are the same, whether read from the list or the array.
Test$Foo(allocated@0x7fb687f0d550 (8 bytes) (shared from auto-allocated@0x7fb687f0d550 (24 bytes))) {
  int a@0x0=0x0001
  int b@0x4=0x0002
}
Test$Foo(allocated@0x7fb687f0d558 (8 bytes) (shared from allocated@0x7fb687f0d558 (8 bytes) (shared from allocated@0x7fb687f0d558 (8 bytes) (shared from allocated@0x7fb687f0d550 (8 bytes) (shared from auto-allocated@0x7fb687f0d550 (24 bytes)))))) {
  int a@0x0=0x0003
  int b@0x4=0x0004
}
Test$Foo(allocated@0x7fb687f0d558 (8 bytes) (shared from allocated@0x7fb687f0d558 (8 bytes) (shared from allocated@0x7fb687f0d550 (8 bytes) (shared from auto-allocated@0x7fb687f0d550 (24 bytes))))) {
  int a@0x0=0x0003
  int b@0x4=0x0004
}
If you don't want to put useMemory in every one of your structure definitions you can still put it in an intermediate class that extends Structure and then extend that intermediate class instead of Structure.
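That intermediate class could be as small as this (a sketch; the class name is made up):

// Hypothetical base class that widens useMemory() for all your structures
public abstract class RebasableStructure extends Structure {
    @Override
    public void useMemory(Pointer m) {
        super.useMemory(m);
    }
}

Your structure definitions would then extend RebasableStructure instead of Structure.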
If you don't want to override useMemory() in your structure definitions (or a superclass of them), you can still do it "simply" in code with a little bit of inefficiency by copying over the memory.
In order to "get" that memory to write it elsewhere, you have to either read it from the Java-side memory (via reflection, which is what JNA does to convert the structure to the native memory block), or read it from native-side memory (which requires writing it there, even if all you want to do is read it). Under the hood, JNA writes the native bytes field-by-field, all hidden under a simple write() call in the API.
Your "Rubbish Solution #2" seems close to what's desired in this case. Here are the constraints that we have to deal with, with whatever solution:
In the existing list or array of Structure, the native memory is not contiguous (unless you pre-allocate contiguous memory yourself, and use that memory in a controlled manner, or override useMemory() as demonstrated above), and the size is variable.
The native function taking an array argument expects a block of contiguous memory.
Here are the "JNA ways" of dealing with structures and memory:
Structures have native-allocated memory at a pointer value accessible via Structure.getPointer() with a size of (at least) Structure.size().
Structure native memory can be read in bulk using Structure.getByteArray().
Structures can be constructed from a pointer to native memory using the new Structure(Pointer p) constructor.
The Structure.toArray() method creates an array of structures backed by a large, contiguous block of native memory.
I think your solution #2 is a rather efficient way of doing it, but your question indicates you'd like more type safety, or at least self-documenting code, in which case I'd point out a more "JNA way" of modifying #2 with two steps:
Replace the new Memory(array.length * size) native allocation with the Structure.toArray() allocation from your solution #1.
You still have a length * size block of contiguous native memory and a pointer to it (array[0].getPointer()).
You additionally have pointers to the offsets, so you could replace mem.write(n * size, ... ) with array[n].getPointer().write(0, ... ).
There is no getting around the memory copying, but having two well-commented lines which call getByteArray() and immediately write() that byte array seems clear enough to me.
You could even one-line it... write(0, getByteArray(0, size), 0, size), although one might argue if that's more or less clear.
So, adapting your method #2, I'd suggest:
// Make your collection an array as you do, but you could just keep it in the list
// using `size()` and `list.get(n)` rather than `length` and `array[n]`.
MyStructure[] array = list.toArray(MyStructure[]::new);

// Allocate a contiguous block of memory of the needed size.
// This actually writes the native memory for index 0,
// so you can start the below iteration from 1.
MyStructure[] structureArray = (MyStructure[]) array[0].toArray(array.length);

// Iterate the contiguous memory and copy over bytes from the array/list
int size = array[0].size();
for(int n = 1; n < array.length; ++n) {
    if(array[n] != null) {
        // sync local structure to native (using reflection on fields)
        array[n].write();
        // read bytes from the non-contiguous native memory
        byte[] bytes = array[n].getPointer().getByteArray(0, size);
        // write bytes into the contiguous native memory
        structureArray[n].getPointer().write(0, bytes, 0, bytes.length);
        // sync native to local (using reflection on fields)
        structureArray[n].read();
    }
}
From a "clean code" standpoint I think this rather effectively accomplishes your goal. The one "ugly" part of the above method is that JNA doesn't provide an easy way to copy fields between Structures without writing them to native memory in the process. Unfortunately that's the "JNA way" of "serializing" and "deserializing" objects, and it's not designed with any "magic" for your use case. Strings include built-in methods to convert to bytes, making such "magic" methods easier.
It is also possible to avoid writing the structure to native memory just to read it back again, if you do the field-by-field copy as you implied in your Method #1. However, you could use JNA's field accessors to make it a lot easier to access the reflection under the hood. The field methods are protected, so you'd have to extend Structure to do this -- although if you're doing that, the useMemory() approach is probably better! But you could then pull this iteration out of write():
for (StructField sf : fields().values()) {
    // do stuff with sf
}
My initial thought would be to iterate over the non-contiguous Structure fields using the above loop, storing a Field.copy() in a HashMap with sf.name as the key. Then, perform that same iteration on the other (contiguous) Structure object's fields, reading from the HashMap and setting their values.
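As a rough illustration of that field-copy idea using plain reflection rather than JNA's protected StructField accessors (fieldCopy is a hypothetical helper; it assumes matching classes and flat public fields, and ignores nested structures):

// Copy public instance fields from one structure to another
// without touching native memory.
static void fieldCopy(Structure src, Structure dst) throws IllegalAccessException {
    for (java.lang.reflect.Field f : src.getClass().getFields()) {
        if (!java.lang.reflect.Modifier.isStatic(f.getModifiers())) {
            f.set(dst, f.get(src));
        }
    }
}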
If you are able to create a contiguous block of memory, why don't you simply deserialize your list into it?
I.e. something like:
MyStructure[] array = (MyStructure[]) list.get(0).toArray(list.size());
list.toArray(array);
pass(array);
In any case, you'd better not store Structure objects in your List or any other collection. It is a better idea to hold POJOs inside, and then map them to an array of structures directly, using a bean mapping library or manually.
With the MapStruct bean mapping library it may look like:
@Mapper
public interface FooStructMapper {
    FooStructMapper INSTANCE = Mappers.getMapper( FooStructMapper.class );

    void update(FooBean src, @MappingTarget MyStructure dst);
}

MyStructure[] block = (MyStructure[]) new MyStructure().toArray(list.size());
for(int i = 0; i < block.length; i++) {
    FooStructMapper.INSTANCE.update(list.get(i), block[i]);
}
The point: the Structure constructor allocates its memory block using Memory, which is a really slow operation, and the memory is allocated outside of the Java heap. It is always better to avoid this allocation whenever you can.
The solutions offered by Daniel Widdis will solve this 'problem' if one really needs to perform a byte-by-byte copy of a JNA structure.
However I have come round to the way of thinking expressed by some of the other posters - JNA structures are intended purely for marshalling to/from the native layer and should not really be used as 'data'. We should be defining domain POJOs and transforming those to JNA structures as required - a bit more work, but something we can live with I guess.
EDIT: Here is the solution that I eventually implemented using a custom stream collector:
public class StructureCollector<T, R extends Structure> implements Collector<T, List<T>, R[]> {
    /**
     * Helper - Converts the given collection to a contiguous array referenced by the <b>first</b> element.
     * @param <T> Data type
     * @param <R> Resultant JNA structure type
     * @param data Data
     * @param identity Identity constructor
     * @param populate Population function
     * @return <b>First</b> element of the array
     */
    public static <T, R extends Structure> R toArray(Collection<T> data, Supplier<R> identity, BiConsumer<T, R> populate) {
        final R[] array = data.stream().collect(new StructureCollector<>(identity, populate));
        if(array == null) {
            return null;
        }
        else {
            return array[0];
        }
    }

    private final Supplier<R> identity;
    private final BiConsumer<T, R> populate;
    private final Set<Characteristics> chars;

    /**
     * Constructor.
     * @param identity Identity structure
     * @param populate Population function
     * @param chars Stream characteristics
     */
    public StructureCollector(Supplier<R> identity, BiConsumer<T, R> populate, Characteristics... chars) {
        this.identity = notNull(identity);
        this.populate = notNull(populate);
        this.chars = Set.copyOf(Arrays.asList(chars));
    }

    @Override
    public Supplier<List<T>> supplier() {
        return ArrayList::new;
    }

    @Override
    public BiConsumer<List<T>, T> accumulator() {
        return List::add;
    }

    @Override
    public BinaryOperator<List<T>> combiner() {
        return (left, right) -> {
            left.addAll(right);
            return left;
        };
    }

    @Override
    public Function<List<T>, R[]> finisher() {
        return this::finish;
    }

    @SuppressWarnings("unchecked")
    private R[] finish(List<T> list) {
        // Check for empty data
        if(list.isEmpty()) {
            return null;
        }

        // Allocate contiguous array
        final R[] array = (R[]) identity.get().toArray(list.size());

        // Populate array
        final Iterator<T> itr = list.iterator();
        for(final R element : array) {
            populate.accept(itr.next(), element);
        }
        assert !itr.hasNext();

        return array;
    }

    @Override
    public Set<Characteristics> characteristics() {
        return chars;
    }
}
This nicely wraps up the code that allocates and populates a contiguous array, example usage:
class SomeDomainObject {
    private void populate(SomeStructure struct) {
        ...
    }
}

class SomeStructure extends Structure {
    ...
}

Collection<SomeDomainObject> collection = ...

SomeStructure[] array = collection
    .stream()
    .collect(new StructureCollector<>(SomeStructure::new, SomeStructure::populate));
Hopefully this might help anyone that's doing something similar.
I need to parse the same JSON stream twice: once to identify, say, the length of an array in the stream, and again to parse the entities. However, there is only a single instance of JsonParser to start with. Is there a way I can clone it or create a copy of it? Once the instance has been used to parse, it obviously can't be reused to re-parse the same JSON stream.
Thanks in advance.
Example:
static class ResultEntitiesContainer {
    List<ResultEntity> resultEntities;
    // getter and setters available
}

void parseEntities(JsonParser parser) {
    // Need to extract number of entities.
    int count = 0;
    ObjectMapper om = new ObjectMapper();
    JsonNode node = om.readTree(parser);
    node = node.get("resultEntities");
    if (node.isArray()) {
        count = node.size();
    }

    // Need to parse the entities in the json node
    ResultEntitiesContainer rec = om.readValue(parser, ResultEntitiesContainer.class);
}
This answer aims to address the question of cloning the JsonParser assuming it is required.
com.fasterxml.jackson.core.JsonParser is a public abstract class and it does not provide a clone or similar method.
An abstract class may be extended by different implementations that the author of JsonParser.java has no control of.
Similarly it is not safe to clone a JsonParser as an argument of void parseEntities(JsonParser parser); because the author of parseEntities cannot be sure which implementation is used and whether it can be cloned.
However if you (as the author of parseEntities) do have control over the used implementations, then it is safe to clone the known implementations (assuming this is possible).
So if you do know which specific implementation (or implementations) of JsonParser your class will be using, you can try and clone specifically these known implementations.
E.g. add and implement one or more methods (as needed) like:
void parseEntities(MyJsonParser parser);
void parseEntities(MyOtherJsonParser parser);
Then it is a question of cloning the specific implementations of JsonParser that are used. For instance assuming MyJsonParser supports cloning the following could be valid.
void parseEntities(MyJsonParser parser){
    MyJsonParser clonedParser = parser.clone(); // depends on implementation
    ...
}
As far as I can see, there is no need to parse twice. Just parse it once into an object of type ResultEntitiesContainer and count the elements in the list to get count. You could change method parseEntities as follows:
void parseEntities(JsonParser parser) {
    ObjectMapper om = new ObjectMapper();

    // Need to parse the entities in the json node
    ResultEntitiesContainer rec = om.readValue(parser, ResultEntitiesContainer.class);

    // Need to extract number of entities.
    int count = rec.getResultEntities().size();
}
Alternatively you can parse to object ResultEntitiesContainer from json node as follows:
ResultEntitiesContainer rec = om.treeToValue(node, ResultEntitiesContainer.class);
Remark:
Please double check if ResultEntitiesContainer should be static.
I have an object from old Java code, and I have since changed the code of the serialized object. I want to be able to read both the old files and the new files. I need a branching statement in readObject to do something like:
if (next object is int -- just poking, it might be an Object)
{
// we know we are in version 1
} else {
// read new version of object
}
Is that possible to do?
Ok so basically the question is "How can we check with an ObjectInputStream whether the next field is a primitive or an object?" and the answer as far as I can see is: You can't.
Which means the best solution I can see to keep backwards compatibility is to never remove primitives from the original version - keeping useless information blows up the size a bit, but otherwise that's easy.
To add new fields, I can see two ways:
Keep the earlier message format identical and only add new objects at the end - you can then easily distinguish different versions by the message size (or more exactly you'll get an IOException when reading data of v2 when you get a v1 object). That's much simpler and generally preferred.
You can change objects from v1 to v2 and then do a simple instanceof check. If you want to add primitives, store their wrapper versions (i.e. Integer et al.) instead. That can save you some bytes, but Java's serialization protocol was never efficient to begin with, so really that's unnecessarily complicated.
if (object instanceof Integer) {
    ... Do stuff
} else {
    ... Do other stuff
}

EDIT: I suppose I should expand on this. You can check object types using instanceof, but I'm not sure about being able to work with primitives like int or char.
The easiest way to do this is to keep the old member variables with their old types and add new member variables for the new types. Also, you must keep the serialVersionUID of the class the same. Then your readObject() implementation can do any necessary manipulation to transform the old data into the new data.
Original Object:

public class MyObject {
    private static final long serialVersionUID = 1234L;

    private int _someVal;
}

New version:

public class MyObject {
    private static final long serialVersionUID = 1234L;

    private int _someVal; // obsolete
    private String _newSomeVal;

    private void readObject(java.io.ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        in.defaultReadObject();

        if(_someVal != 0) {
            // translate _someVal to _newSomeVal
        }
    }
}
I believe there are more complex options available as well using custom ObjectStreamField[] serialPersistentFields, ObjectInputStream.GetField and ObjectOutputStream.PutField.
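A sketch of what that might look like for the example above (hedged: check the java.io.Serializable documentation for the exact contract of serialPersistentFields):

public class MyObject implements Serializable {
    private static final long serialVersionUID = 1234L;

    // Declare the serialized form explicitly; the old int field stays
    // part of the stream format even though the class no longer uses it.
    private static final ObjectStreamField[] serialPersistentFields = {
        new ObjectStreamField("_someVal", int.class),
        new ObjectStreamField("_newSomeVal", String.class)
    };

    private String _newSomeVal;

    private void readObject(java.io.ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        ObjectInputStream.GetField fields = in.readFields();
        if (fields.defaulted("_newSomeVal")) {
            // old stream: only _someVal was written, so translate it
            _newSomeVal = String.valueOf(fields.get("_someVal", 0));
        } else {
            _newSomeVal = (String) fields.get("_newSomeVal", null);
        }
    }
}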
ObjectInputStream will load and create an instance of the right class.
object = ois.readObject();
if (object instanceof YourNewShiny) {
    // new style object
} else if (object instanceof YourOldBusted) {
    // convert YourOldBusted to YourNewShiny
} else {
    throw new ToyOutOfPram();
}
This is all great if you have a new class, but not if you have changed your class in an incompatible manner, such that ObjectInputStream cannot deserialise the old versions of the class into the new form. If this is the case, you are pretty much stuffed.
Options:
Revert your changes and redo them compatibly, i.e. add a serialVersionUID, don't change the order of fields, only add new fields, and don't assume not-null constraints
Using an old version of the code, read the serialised data and convert it to some intermediate form (XML, CSV, etc.), then import this data into the new class definition and serialise it back out
Manually re-implement ObjectInputStream to detect your class version (you can use the serialVersionUID to sniff the type)
Only the first seems like a good idea to me.
In the classes below, the Store class has exactly one Fruit as a field variable.
I want the Store class to do the following two things: one is returning the fruit's data with read-only access, and the other is returning the fruit's data with write access. The returned data has type ByteBuffer.
For example, if someone gets the ByteBuffer through getRead, I don't want the ByteBuffer to be modified at all. However, if someone accesses the ByteBuffer through getWrite, then I am allowing them to modify the contents of the memory pointed to by the ByteBuffer.
class Fruit {
    private ByteBuffer data;

    public ByteBuffer getData(){
        return data;
    }
}

class Store {
    Fruit p;

    public ByteBuffer getRead(){
        return p.getData();
    }

    public ByteBuffer getWrite(){
        return p.getData();
    }
}
Is there any way that I can control this access privilege in Java when I am using ByteBuffer? Or should I have two variables in the Fruit class that hold the same value but do different things?
In this particular case it's really easy using asReadOnlyBuffer:
public ByteBuffer getRead(){
    return p.getData().asReadOnlyBuffer();
}
In general, there has to be some sort of wrapping object (as there is here) - Java doesn't have the concept of a read-only "view" on an object from a language point of view; it has to be provided by code which prevents any mutations.
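Putting that together, Store might look like this (a sketch; using duplicate() for the writable view is an assumption on my part, it shares the content but keeps an independent position and limit):

class Store {
    private Fruit p;

    // read-only view: any put() on it throws ReadOnlyBufferException
    public ByteBuffer getRead() {
        return p.getData().asReadOnlyBuffer();
    }

    // writable view: shares the content without exposing the original buffer's state
    public ByteBuffer getWrite() {
        return p.getData().duplicate();
    }
}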
I would like to call a method which could potentially take on different versions, i.e. the same method for input parameters that are of type:
boolean
byte
short
int
long
The way I would like to do this is by "overloading" the method (I think that is the correct term?):
public void getValue(byte theByte) {...}
public void getValue(short theShort) {...}
... etc ...
... but that would mean that I would have to pass the primitive type in by reference, similar to C++, where the method has an external effect and can modify the variable outside its scope.
Is there a way to do this without creating new classes or using the Object versions of the primitive types? If not, any suggestions on alternative strategies?
Let me know if I should further explain to clear up any confusion.
UPDATE
What I'm actually trying to do is construct the primitive type from a set of bits. So if I'm dealing with the byte version of the method, I want to pretty much do my work to get 8 bits and return the byte (since I can't pass by reference).
The reason I'm asking this question is because the work I do with bits is very repetitive and I don't want to have the same code in different methods. So I want to find a way for my ONE method to KNOW how many bits I'm talking about... if I'm working with a byte, then 8 bits, if I'm working with a short, 16 bits, etc...
Java is always pass-by-value. There is no pass-by-reference in Java. It's written in the specs!
While Java supports overloading, all parameters are passed by value, i.e. assigning a method argument is not visible to the caller.
From your code snippet, you are trying to return a value of different types. Since return types are not part of a method's signature, you can not overload with different return types. Therefore, the usual approach is:
int getIntValue() { ... }
byte getByteValue() { ... }
If this is actually a conversion, the standard naming is
int toInt() { ...}
byte toByte() { ... }
You can't. In Java, parameters are always passed by value. If the parameter is a reference type, the reference is passed by value, so you can modify the referenced object inside the method; with primitive types this is not possible.
You will need to create a wrapper type.
Primitives are not passed by reference (nor are objects, for that matter), so no, you cannot.
int i = 1;
moo(i);

public void moo(int bah) {
    bah = 3;
}

System.out.println(i);
Prints out 1
I would say the alternative strategy, if you want to work with primitives, is to do what the Java Libraries do. Just suck it up and have multiple methods.
For example, ObjectInputStream has readDouble(), readByte(), etc.
You're not gaining anything by sharing an implementation of the function, and the clients of your function aren't gaining anything by the variants of your function all having the same name.
UPDATE
Considering your update, I don't think it's necessary to duplicate too much code. It depends on your encoding strategy but I would imagine you could do something like this:
private byte get8Bits();

public byte getByte() {
    return get8Bits();
}

public int getInt() {
    // mask each byte to avoid sign extension when widening to int
    return ((get8Bits() & 0xFF) << 24) | ((get8Bits() & 0xFF) << 16)
            | ((get8Bits() & 0xFF) << 8) | (get8Bits() & 0xFF);
}
Anything that shares code more than that is probably over-engineering.
An alternative could be
private long getBits(int numBits);

public byte getByte() {
    return (byte) getBits(8);
}

public int getInt() {
    return (int) getBits(32);
}
i.e. I don't think it makes sense to expose the users of your library to anything other than the primitive types themselves.
If you really, really wanted to then you could make a single method for access like this:
@SuppressWarnings("unchecked")
public static <T> T getValue(Class<T> clazz) {
    if ( clazz == byte.class ) {
        return (T) Byte.valueOf((byte) getBits(8));
    } else if ( clazz == int.class ) {
        return (T) Integer.valueOf((int) getBits(32));
    }
    throw new UnsupportedOperationException(clazz.toString());
}

//...

byte b = getValue(byte.class);
int i = getValue(int.class);
But I fail to see how it's any less cumbersome for clients of your library.
The object types of primitive types in Java (Double, Integer, Boolean, etc) are, if I remember correctly, immutable. This means that you cannot change the original value inside a method they are passed into.
There are two solutions to this. One is to make a wrapper type that holds the value. If all you are attempting to do is change the value or get a calculation from the value, you could have the method return the result for you. To take your examples:
public byte getValue(byte theByte) {...}
public short getValue(short theShort) {...}
And you would call them by the following:
Short s = 0;
s = foo.getValue(s);
or something similar. This allows you to mutate or change the value, and return the mutated value, which would allow something like the following:
Short s = foo.getValue(10);
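Alternatively, the wrapper type mentioned above could be as simple as this (a hypothetical sketch):

// Mutable holder so a callee can "return" through a parameter
class ByteRef {
    byte value;
}

public void getValue(ByteRef out) {
    out.value = 42; // the caller sees this change
}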
Hope that helps.
Yes, please be more specific about what you want to achieve.
From your description I suggest you have a look at Java generics where you could write something like this:
class SomeClass<GenericType> {
    GenericType val;

    void setValue(GenericType val) {
        this.val = val;
    }

    GenericType getValue() {
        return val;
    }

    public static void main(String[] args) {
        SomeClass<Integer> myObj = new SomeClass<Integer>();
        myObj.setValue(5);
        System.out.println(myObj.getValue());

        SomeClass<String> myObj2 = new SomeClass<String>();
        myObj2.setValue("hello?!");
        System.out.println(myObj2.getValue());
    }
}
Sounds like you have a set of bits that you're parsing through. You should have it wrapped in an object; let's call that object a BitSet. You're iterating through the bits, so you'll have something like an Iterator<Bit>, and as you go you want to parse out bytes, ints, longs, etc... Right?
Then you'll have your class Parser, and it has methods on it like:
public byte readByte(Iterator<Bit> bitit) {
    // reads 8 bits, which moves the iterator forward 8 places,
    // creates the byte, and returns it
}

public int readInt(Iterator<Bit> bitit) {
    // reads 32 bits, which moves the iterator forward 32 places,
    // creates the int, and returns it
}
etc...
So after you call whichever method you need, you've extracted the value you want in a typesafe way (different return types for different methods), and the Iterator has been moved forward the correct number of positions, based on the type.
Is that what you're looking for?
Only by creating your own value holding types.