Javassist's CtClass has a few methods, such as getFields() and getMethods(). I was wondering if these methods offer any guarantee as to their ordering.
Specifically, I'd like to know if using getFields() on a class will produce an array of CtFields where the first field in the array is the first field declared in the file, and so forth. So, is this order guaranteed? If not, is there anything that can offer this guarantee? The javadocs give no information on the matter. I've tried annotations that carry the order (e.g. @X(1) private int first;), but it would be easier if I could reflect this sort of thing without needing annotations.
If it's still not clear, I'd like something like:
public class Class {
    public int x;
    public float y;
    public Object z;
}
to produce an array of CtFields specifically and consistently ordered x, y, z.
If you look at one of the implementations of CtClass, such as CtClassType, you will see that getMethods() is implemented like this:
public CtMethod[] getMethods() {
    HashMap h = new HashMap();
    getMethods0(h, this);
    return (CtMethod[]) h.values().toArray(new CtMethod[h.size()]);
}
According to the HashMap javadoc:
This class makes no guarantees as to the order of the map; in particular, it does not guarantee that the order will remain constant over time.
So, you have absolutely no guarantee in what order you will get the methods.
For the fields, the code is a lot more complicated, but it seems that the raw class file is read as an InputStream:
DataInputStream in = new DataInputStream(
new FileInputStream(args[0]));
ClassFile w = new ClassFile(in);
And the fields are created as they are read from this stream:
fields = new ArrayList();
for (i = 0; i < n; ++i)
    addField2(new FieldInfo(cp, in));
So, the fields are created in the order they are in the class file.
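If you're prepared to rely on that class-file order, one way to read it directly is through the ClassFile API, which exposes the FieldInfo entries in the order they appear in the bytecode. A minimal sketch (the class name com.example.SomeClass is just a placeholder):
import javassist.ClassPool;
import javassist.CtClass;
import javassist.bytecode.ClassFile;
import javassist.bytecode.FieldInfo;

public class FieldOrderDump {
    public static void main(String[] args) throws Exception {
        CtClass cc = ClassPool.getDefault().get("com.example.SomeClass");
        ClassFile cf = cc.getClassFile();
        // ClassFile.getFields() returns the FieldInfo entries in the order
        // in which they appear in the class file.
        for (Object o : cf.getFields()) {
            System.out.println(((FieldInfo) o).getName());
        }
    }
}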
However, reading the JVM Specification and the Java Language Specification, I see no reference to the order of fields in the generated class; this sentence from the Java Language Specification even seems to indicate that the field order is not significant:
Using their scheme, here is a list of some important binary compatible changes that the Java programming language supports:
[...]
Reordering the fields, methods, or constructors in an existing type declaration.
So I think that you have absolutely no guarantee on the field order either.
I tried to run javassist.tools.Dump on a test class I created with a large number of fields and methods, and it seems that fields and methods are printed in source order, but I still think that nothing guarantees it.
Here is my problem (big picture). I have a project which uses large and complicated (by which I mean contains multiple levels of nested structures) Matlab structures. This is predictably slow (especially when trying to load/save). I am attempting to improve runtimes by converting some of these structures into Java objects. The catch is that the data in these Matlab structures is accessed in a LOT of places, so anything requiring a rewrite of access syntax would be prohibitive. Hence, I need the Java objects to mimic as closely as possible the behavior of Matlab structures, particularly when it comes to accessing the values stored within them (the values are only set in one place, so the lack of operator overloading in Java for setting isn't a factor for consideration).
The problem (small picture) that I am encountering lies with accessing data from an array of these structures. For example,
person(1)
    .age = 20
    .name
        .first = 'John'
        .last = 'Smith'
person(2)
    .age = 25
    .name
        .first = 'Jane'
        .last = 'Doe'
Matlab will allow you to do the following,
>>age = [person(1:2).age]
age =
20 25
Attempting to accomplish the same with Java,
>>jperson = javaArray('myMatlab.Person', 2);
>>jperson(1) = Person(20, Name('John', 'Smith'));
>>jperson(2) = Person(25, Name('Jane', 'Doe'));
>>age = [jperson(1:2).age]
??? No appropriate method or public field age for class myMatlab.Person[]
Is there any way that I can get the Java object to mimic this behavior?
The first thought I had was to simply extend the Person[] class, but this doesn't appear to be possible because it is final. My second approach was to create a wrapper class containing an ArrayList of Person; however, I don't believe this will work either, because calling
wrappedPerson(1:2)
would either be interpreted as a constructor call to a WrappedPerson class or an attempt to access elements of a non-existent array of WrappedPerson (since Java won't let me override a "()" operator). Any insight would be greatly appreciated.
The code I am using for my Java classes is
import java.util.ArrayList;

public class Person {
    int _age;
    ArrayList<Name> _names = new ArrayList<Name>();

    public Person(int age, Name name) {
        _age = age;
        _names.add(name);
    }

    public int age() { return _age; }
    public void age(int age) { _age = age; }
    public Name[] name() { return _names.toArray(new Name[0]); }
    public void name(Name name) { _names.add(name); }
}
public class Name {
    String _first;
    String _last;

    public Name(String first, String last) {
        _first = first;
        _last = last;
    }

    public String first() { return _first; }
    public void first(String firstName) { _first = firstName; }
    public String last() { return _last; }
    public void last(String lastName) { _last = lastName; }
}
TL;DR: It's possible, with some fancy OOP M-code trickery. Altering the behavior of () and . can be done with a Matlab wrapper class that defines subsref on top of your Java wrapper classes. But because of the inherent Matlab-to-Java overhead, it probably won't end up being any faster than normal Matlab code, just a lot more complicated and fussy. Unless you move the logic in to Java as well, this approach probably won't speed things up for you.
I apologize in advance for being long-winded.
Before you go whole hog on this, you might benchmark the performance of Java structures as called from your Matlab code. While Java field access and method calls are much faster on their own than Matlab ones, there is substantial overhead to calling them from M-code, so unless you push a lot of the logic down into Java as well, you might well end up with a net loss in speed. Every time you cross the M-code-to-Java layer, you pay. Have a look at the benchmark over at this answer: Is MATLAB OOP slow or am I doing something wrong? to get an idea of scale. (Full disclosure: that's one of my answers.) It doesn't include Java field access, but it's probably on the order of method calls due to the autoboxing overhead. And if you are coding Java classes as in your example, with getter and setter methods instead of public fields (that is, in "good" Java style), then you will be incurring the cost of Java method calls with each access, and it's going to be bad compared to pure Matlab structures.
All that said, if you wanted to make that x = [foo(1:2).bar] syntax work inside M-code where foo is a Java array, it would basically be possible. The () and . are both evaluated in Matlab before calling to Java. What you could do is define your own custom JavaArrayWrapper class in Matlab OOP corresponding to your Java array wrapper class, and wrap your (possibly wrapped) Java arrays in that. Have it override subsref and subsasgn to handle both () and .. For (), do normal subsetting of the array, returning it wrapped in a JavaArrayWrapper. For the . case:
If the wrapped object is scalar, invoke the Java method as normal.
If the wrapped object is an array, loop over it, invoke the Java method on each element, and collect the results. If the results are Java objects, return them wrapped in a JavaArrayWrapper.
But. Due to the overhead of crossing the Matlab/Java barrier, this would be slow, probably an order of magnitude slower than pure Matlab code.
To get it to work at speed, you could provide a corresponding custom Java class that wraps Java arrays and uses the Java Reflection API to extract the property of each selected array member object and collect them in an array. The key is that when you do a "chained" reference in Matlab like x = foo(1:3).a.b.c and foo is an object, it doesn't do a stepwise evaluation where it evaluates foo(1:3), and then calls .a on the result, and so on. It actually parses the entire (1:3).a.b.c reference, turns that into a structured argument, and passes the entire thing into the subsref method of foo, which has responsibility for interpreting the entire chain. The implicit call looks something like this.
x = subsref(foo, [ struct('type','()','subs',{{[1 2 3]}}), ...
struct('type','.', 'subs','a'), ...
struct('type','.', 'subs','b'), ...
struct('type','.', 'subs','c') ] )
So, given that you have access to the entire reference "chain" up front, if foo were an M-code wrapper class that defined subsref, you could convert that entire reference to a Java argument and pass it in a single method call to your Java wrapper class, which would then use Java reflection to dynamically go through the wrapped array, select the referenced elements, and do the chained references, all inside the Java layer. E.g. it would call getNestedFields() in a Java class like this.
import java.lang.reflect.Field;
import java.util.ArrayList;

public class DynamicFieldAccessArrayWrapper {
    private ArrayList<Object> _wrappedArray;

    public Object[] getNestedFields(int[] selectedIndexes, String[] fieldPath) throws Exception {
        ArrayList<Object> result = new ArrayList<Object>();
        if (selectedIndexes == null) {
            // No explicit selection: take every element (1-based, to match Matlab indexing)
            selectedIndexes = new int[_wrappedArray.size()];
            for (int i = 0; i < selectedIndexes.length; i++) {
                selectedIndexes[i] = i + 1;
            }
        }
        for (int ix : selectedIndexes) {
            Object val = _wrappedArray.get(ix - 1);
            // Walk the chain of field names, e.g. {"name", "first"}
            for (String fieldName : fieldPath) {
                Field field = val.getClass().getField(fieldName);
                val = field.get(val);
            }
            result.add(val);
        }
        // Return as an array so Matlab can auto-unbox it; a real implementation
        // would need more type detection to get the array type right.
        return result.toArray();
    }
}
Then your M-code wrapper class would examine the result and decide whether it was primitive-ish and should be returned as a Matlab array or comma-separated list (i.e. multiple argouts, which get collected with [...]), or should be wrapped in another JavaArrayWrapper M-code object.
The M-code wrapper class would look something like this.
classdef MyMJavaArrayWrapper < handle
    % Inherit from handle because Java objects are reference-y
    properties
        jWrappedArray  % holds a DynamicFieldAccessArrayWrapper
    end
    methods
        function varargout = subsref(obj, s)
            if isequal(s(1).type, '()')
                indices = s(1).subs;
                s(1) = [];
            else
                indices = [];
            end
            % TODO: check for unsupported indexing types in remaining s
            fieldNameChain = parseFieldNamesFromArgs(s);
            out = getNestedFields(obj.jWrappedArray, indices, fieldNameChain);
            varargout = unpackResultsAndConvertIfNeeded(out);
        end
    end
end
The overhead involved in marshalling and unmarshalling the values for the subsref call would probably overwhelm any speed gain from the Java bits.
You could probably eliminate that overhead by replacing your M-code implementation of subsref with a MEX implementation that does the structure marshalling and unmarshalling in C, using JNI to build the Java objects, call getNestedFields, and convert the result to Matlab structures. This is way beyond what I could give an example for.
If this looks a bit horrifying to you, I totally agree. You're bumping up against the edges of the language here, and trying to extend the language (especially to provide new syntactic behavior) from userland is really hard. I wouldn't seriously do something like this in production code; I'm just trying to outline the problem space you're looking at.
Are you dealing with homogeneous arrays of these deeply nested structures? Maybe it would be possible to convert them to "planar organized" structures, where instead of an array of structs with scalar fields, you have a scalar struct with array fields. Then you can do vectorized operations on them in pure M-code. This would make things a lot faster, especially with save and load, where the overhead scales per mxarray.
Just curious: is there any technical limitation to having multiple return values for methods in languages like Java, C, and C++, or is the limitation just by spec? In assembly language, I understand the callee can place one return value in a register.
Because in the days of C there was (and often still is) a single register used to hold the return value.
Because if you need more values, you can just return a struct, reference (in Java/C#), or pointer.
Because you can use an out parameter.
Allowing multiple return values would add complexity, and it's simply worked around. There's no reason for it to be there. (Indeed, in C++ you can return a tuple (from TR1, C++11, or boost) which effectively is multiple return values)
It's by design: there is no need to allow multiple values in the return statement. You can always define a struct with all the needed members, create an instance of it, and return that. Simple!
Example,
#include <string>

struct Person
{
    std::string Name;
    int Age;
    std::string Qualification;
    //...
};

Person GetInfo()
{
    Person person;
    // fill person's members ...
    return person;
}
You can use std::pair, std::vector, std::map, std::list and so on. In C++0x, you can use std::tuple as well.
If the Genie gave you only one wish, you could just wish to have any number of wishes. It's the same with just one return value from a method. You can use your return value as a pointer to an address where an object full of attributes resides and then query those attributes (properties)... This way there really is no limitation. :-)
Fun coding and many happy returns :-)
It's just a design decision, and because people are used to it. In principle there wouldn't be anything preventing a language designer from implementing a syntax like this:
(int, int, int) call(int x, int y, int z);
and a function call could look like this:
(a, b, c) = call(1, 2, 3);
or whatever syntax they would choose for this task. Though one could discuss if it would add to readability. And as others have pointed out, some languages implement this by tuples or similar constructs.
Sure, the return statement:
(int, int, int) call(int x, int y, int z)
{
    return x+1, y+1, z+1;
}
You could even think of useful applications like:
(err, filehandle) = OpenFileDialog(...)
where the function can return either a detailed error code or a valid file handle. Though exceptions take this place nowadays. But exceptions are in some sense a way to return at least two alternative values: either the requested function return value or the raised exception.
Because good programming languages encourage programmers to do the right thing. If a method needs to return multiple values, those values are probably related, and thus should be grouped together in something like a struct.
Just my 2 cents.
It's mostly due to historical reasons having to do with machine calling conventions. Also because C doesn't have a pattern-matching syntax on the caller side to retrieve the results. Note that languages like ML or Haskell have a syntactically lightweight tuple type that is perfectly usable for returning multiple values.
Edited:
Actually thinking about it a little bit, I guess if you wanted to split hairs, ML and Haskell still have a "single" return value. It's just that tuples are so lightweight syntactically that it's convenient to think about functions returning multiple values rather than a single tuple.
To be totally rigorous, there are two languages I can think of that have "proper" multiple-value returns that are not just tuples in some shape. One is Scheme (cf. call-with-values), and the other is MATLAB:
function [x,y] = myFunc(a, b)
...
end
[p, q] = myFunc(3,4)
In both of these languages, there is a special syntactic distinction between a single value that happens to be an aggregate (cons cell, array, respectively) and multiple values.
It's just a decision made by the language and/or ABI designers. No more, no less. In assembly language, you can make those decisions yourself, though - I'm not sure what your last comment means.
We don't need the ability to return multiple values built into the C++ language, because the library works just fine:
#include <tuple>

std::tuple<int, float> func()
{
    return std::make_tuple(1, 2.f);
}

int i;
float f;
std::tie(i, f) = func();
Most other languages have similar functionality in their standard library.
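In Java, which has no general-purpose tuple in its standard library, a Map.Entry can stand in for a two-value return. A rough sketch (assuming Java 9+ for Map.entry; the names are made up):
import java.util.Map;

class PairReturnDemo {
    // Bundle two values into a Map.Entry instead of a tuple.
    static Map.Entry<Integer, Float> func() {
        return Map.entry(1, 2.0f);
    }

    public static void main(String[] args) {
        Map.Entry<Integer, Float> result = func();
        int i = result.getKey();
        float f = result.getValue();
        System.out.println(i + " " + f);
    }
}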
Actually, there are at least two ways to return multiple values.
The first is to create a struct or class, put all the return data in it, and return it.
The second is to pass parameters by reference (non-const) and put the values in there.
It is useful to support this, and given how people find it convenient in other languages, C and Java may move that way too.
C++ is already standardising on the kind of convenient, intuitive handling of return values familiar from e.g. Ruby and Python for the function-caller side, which is more important than at the return itself, because a single function is likely called from a great many call sites.
Specifically, the C++17 structured bindings proposal (see also its standard wording) documents a notation...
auto [x,y,z] = expression;
...where expression can be a function returning - or any other expression evaluating to - an array, a tuple, or a struct with all public members. The above can be preceded by const to make the local variables const.
The feature also documents this...
for (const auto& [key,value] : mymap)
...
...which avoids repeated use of the less-expressive ->first and ->second.
With C++ moving in this direction, it's likely C and other C-derived languages will look carefully at doing likewise.
How can I pass a primitive type by reference in java? For instance, how do I make an int passed to a method modifiable?
There isn't a way to pass a primitive directly by reference in Java.
A workaround is to instead pass a reference to an instance of a wrapper class, which then contains the primitive as a member field. Such a wrapper class could be extremely simple to write for yourself:
public class IntRef { public int value; }
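Used roughly like this (a trivial sketch; the method name is made up):
public class IntRefDemo {
    static void addTen(IntRef ref) {
        ref.value += 10;   // the change is visible to the caller
    }

    public static void main(String[] args) {
        IntRef n = new IntRef();
        n.value = 32;
        addTen(n);
        System.out.println(n.value);   // prints 42
    }
}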
But how about some pre-built wrapper classes, so we don't have to write our own? OK:
The Apache commons-lang Mutable* classes:
Advantages: Good performance for single threaded use. Completeness.
Disadvantages: Introduces a third-party library dependency. No built-in concurrency controls.
Representative classes: MutableBoolean, MutableByte, MutableDouble, MutableFloat, MutableInt, MutableLong, MutableObject, MutableShort.
The java.util.concurrent.atomic Atomic* classes:
Advantages: Part of the standard Java (1.5+) API. Built-in concurrency controls.
Disadvantages: Small performance hit when used in a single-threaded setting. Missing direct support for some datatypes, e.g. there is no AtomicShort.
Representative classes: AtomicBoolean, AtomicInteger, AtomicLong, and AtomicReference.
Note: As user ColinD shows in his answer, AtomicReference can be used to approximate some of the missing classes, e.g. AtomicShort.
Length 1 primitive array
OscarRyz's answer demonstrates using a length 1 array to "wrap" a primitive value.
Advantages: Quick to write. Performant. No 3rd party library necessary.
Disadvantages: A little dirty. No built-in concurrency controls. Results in code that does not (clearly) self-document: is the array in the method signature there so I can pass multiple values? Or is it here as scaffolding for pass-by-reference emulation?
Also see
The answers to StackOverflow question "Mutable boolean field in Java".
My Opinion
In Java, you should strive to use the above approaches sparingly or not at all. In C it is common to use a function's return value to relay a status code (SUCCESS/FAILURE), while a function's actual output is relayed via one or more out-parameters. In Java, it is best to use Exceptions instead of return codes. This frees up method return values to be used for carrying the actual method output -- a design pattern which most Java programmers find to be more natural than out-parameters.
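For instance (a small sketch with made-up names), instead of the C-style status-code-plus-out-parameter shape, the return value carries the real output and failure is signalled with an exception:
public class ParseDemo {
    // The return value carries the actual output; failure is an exception,
    // not a status code written into an out-parameter.
    static int parsePositive(String s) {
        int n = Integer.parseInt(s);   // throws NumberFormatException on bad input
        if (n <= 0) {
            throw new IllegalArgumentException("not positive: " + s);
        }
        return n;
    }

    public static void main(String[] args) {
        System.out.println(parsePositive("42"));
    }
}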
Nothing in Java is passed by reference. It's all passed by value.
Edit: Both primitives and object types are passed by value. You can never alter the passed value/reference and expect the originating value/reference to change. Example:
String a = "hello";
int b = 42;
doSomething(a, b);
...

public void doSomething(String myA, int myB) {
    // whatever I do to "myA" and "myB" here will never ever ever change
    // the original "a" and "b"
}
The only way to get around this hurdle, regardless of it being a primitive or reference, is to pass a container object, or use the return value.
With a holder:
private class MyStringHolder {
    String a;
    MyStringHolder(String a) {
        this.a = a;
    }
}

MyStringHolder holdA = new MyStringHolder("something");

public void doSomething(MyStringHolder holder) {
    // alter holder.a here and it changes.
}
With return value
int b = 42;
b = doSomething(b);

public int doSomething(int b) {
    return b + 1;
}
Pass an AtomicInteger, AtomicBoolean, etc. instead. There isn't one for every primitive type, but you can use, say, an AtomicReference<Short> if necessary too.
Do note: there should very rarely be a need to do something like this in Java. When you want to do it, I'd recommend rethinking what you're trying to do and seeing if you can't do it some other way (using a method that returns an int, say... what exactly the best thing to do is will vary from situation to situation).
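For example, a minimal sketch of the AtomicInteger approach (the method here is invented for illustration):
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounterDemo {
    static void countVowels(String s, AtomicInteger counter) {
        for (char c : s.toCharArray()) {
            if ("aeiou".indexOf(c) >= 0) {
                counter.incrementAndGet();   // updates the caller's counter
            }
        }
    }

    public static void main(String[] args) {
        AtomicInteger vowels = new AtomicInteger(0);
        countVowels("hello world", vowels);
        System.out.println(vowels.get());    // prints 3
    }
}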
That's not possible in Java; as an alternative, you can wrap it in a single-element array.
void demo() {
    int[] a = { 0 };
    increment(a);
}

void increment(int[] v) {
    v[0]++;
}
But there are always better options.
You can't. But you can return an integer which is a modified value
int i = 0;
i = doSomething(i);
If you are passing in more than one you may wish to create a Data Transfer Object (a class specifically to contain a set of variables which can be passed to classes).
Pass an object that has that value as a field.
That's not possible in Java.
One option is to use classes like java.lang.Integer; then you're not passing a primitive at all.
On the other hand, you can just use code like:
int a = 5;
a = func(a);
and have func return the modified value.
Occasionally, we have to write methods that receive many, many arguments, for example:
public void doSomething(Object objA, Object objectB, Date date1, Date date2, String str1, String str2)
{
}
When I encounter this kind of problem, I often encapsulate the arguments into a map.
Map<Object, Object> params = new HashMap<Object, Object>();
params.put("objA", objA);
......

public void doSomething(Map<Object, Object> params)
{
    // extracting params
    Object objA = (Object) params.get("objA");
    ......
}
This is not good practice; encapsulating params into a map is a waste of efficiency. The good thing is the clean signature, and it's easy to add other params with minimal modification.
What's the best practice for this kind of problem?
In Effective Java, Chapter 7 (Methods), Item 40 (Design method signatures carefully), Bloch writes:
There are three techniques for shortening overly long parameter lists:
break the method into multiple methods, each of which requires only a subset of the parameters
create helper classes to hold groups of parameters (typically static member classes)
adapt the Builder pattern from object construction to method invocation.
For more details, I encourage you to buy the book, it's really worth it.
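As a rough, hypothetical sketch of the third technique (these class and method names are invented, not from the book), the long-parameter method call can itself be expressed as a builder that is configured step by step and then executed:
public class ReportExporter {
    // Hypothetical sketch: the long-parameter method becomes a small builder
    // that collects its "parameters" and then executes.
    public Export export() {
        return new Export();
    }

    public class Export {
        private String format = "csv";
        private String table;

        public Export format(String format) { this.format = format; return this; }
        public Export table(String table)   { this.table = table;   return this; }

        public void invoke() {
            // ... the work the long-parameter method used to do ...
            System.out.println("exporting " + table + " as " + format);
        }
    }
}
A caller would then write something like new ReportExporter().export().table("angry_robots").format("xml").invoke(), which names each value at the call site.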
Using a map with magical String keys is a bad idea. You lose any compile time checking, and it's really unclear what the required parameters are. You'd need to write very complete documentation to make up for it. Will you remember in a few weeks what those Strings are without looking at the code? What if you made a typo? Use the wrong type? You won't find out until you run the code.
Instead use a model. Make a class which will be a container for all those parameters. That way you keep the type safety of Java. You can also pass that object around to other methods, put it in collections, etc.
Of course if the set of parameters isn't used elsewhere or passed around, a dedicated model may be overkill. There's a balance to be struck, so use common sense.
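For example, a rough sketch of such a model class (the names are invented for illustration):
import java.util.Date;

public class ReportRequest {
    private final Date startDate;
    private final Date endDate;
    private final String format;

    public ReportRequest(Date startDate, Date endDate, String format) {
        this.startDate = startDate;
        this.endDate = endDate;
        this.format = format;
    }

    public Date getStartDate() { return startDate; }
    public Date getEndDate()   { return endDate; }
    public String getFormat()  { return format; }
}

// The original method signature then shrinks to something like:
// public void doSomething(ReportRequest request) { ... }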
If you have many optional parameters, you can create a fluent API: replace the single method with a chain of methods
exportWithParams().datesBetween(date1,date2)
.format("xml")
.columns("id","name","phone")
.table("angry_robots")
.invoke();
Using static import you can create inner fluent APIs:
... .datesBetween(from(date1).to(date2)) ...
It's called "Introduce Parameter Object". If you find yourself passing same parameter list on several places, just create a class which holds them all.
XXXParameter param = new XXXParameter(objA, objB, date1, date2, str1, str2);
// ...
doSomething(param);
Even if you don't find yourself passing the same parameter list so often, this easy refactoring will still improve your code's readability, which is always good. If you look at your code three months later, it will be easier to comprehend when you need to fix a bug or add a feature.
It's a general philosophy of course, and since you haven't provided any details, I cannot give you more detailed advice either. :-)
First, I'd try to refactor the method. If it's using that many parameters it may be too long any way. Breaking it down would both improve the code and potentially reduce the number of parameters to each method. You might also be able to refactor the entire operation to its own class. Second, I'd look for other instances where I'm using the same (or superset) of the same parameter list. If you have multiple instances, then it likely signals that these properties belong together. In that case, create a class to hold the parameters and use it. Lastly, I'd evaluate whether the number of parameters makes it worth creating a map object to improve code readability. I think this is a personal call -- there is pain each way with this solution and where the trade-off point is may differ. For six parameters I probably wouldn't do it. For 10 I probably would (if none of the other methods worked first).
This is often a problem when constructing objects.
In that case, use the Builder pattern; it works well if you have a big list of parameters and don't always need all of them.
You can also adapt it to method invocation.
It also increases readability a lot.
public class BigObject
{
    // public getters
    // private setters

    public static class Builder
    {
        private A f1;
        private B f2;
        private C f3;
        private D f4;
        private E f5;

        public Builder setField1(A f1) { this.f1 = f1; return this; }
        public Builder setField2(B f2) { this.f2 = f2; return this; }
        public Builder setField3(C f3) { this.f3 = f3; return this; }
        public Builder setField4(D f4) { this.f4 = f4; return this; }
        public Builder setField5(E f5) { this.f5 = f5; return this; }

        public BigObject build()
        {
            BigObject result = new BigObject();
            result.setField1(f1);
            result.setField2(f2);
            result.setField3(f3);
            result.setField4(f4);
            result.setField5(f5);
            return result;
        }
    }
}
// Usage:
BigObject boo = new BigObject.Builder()
.setField1(/* whatever */)
.setField2(/* whatever */)
.setField3(/* whatever */)
.setField4(/* whatever */)
.setField5(/* whatever */)
.build();
You can also put verification logic into Builder set..() and build() methods.
There is a pattern called Parameter Object.
The idea is to use one object in place of all the parameters. Now even if you need to add parameters later, you just need to add them to the object. The method interface remains the same.
You could create a class to hold that data. It needs to be meaningful enough, though, but it's much better than using a map (OMG).
Code Complete* suggests a couple of things:
"Limit the number of a routine's parameters to about seven. Seven is a magic number for people's comprehension" (p 108).
"Put parameters in input-modify-output order ... If several routines use similar parameters, put the similar parameters in a consistent order" (p 105).
Put status or error variables last.
As tvanfosson mentioned, pass only the parts of structured variables (objects) that the routine needs. That said, if you're using most of the structured variable in the function, then just pass the whole structure, but be aware that this promotes coupling to some degree.
* First Edition, I know I should update. Also, it's likely that some of this advice may have changed since the second edition was written when OOP was beginning to become more popular.
Using a Map is a simple way to clean up the call signature, but then you have another problem. You need to look inside the method's body to see what the method expects in that Map: what the key names are and what types the values have.
A cleaner way would be to group all parameters in an object bean but that still does not fix the problem entirely.
What you have here is a design issue. With more than 7 parameters to a method, you will start to have problems remembering what they represent and what order they come in. From there you will get lots of bugs just by calling the method with parameters in the wrong order.
You need a better design of the app, not a best practice for sending lots of parameters.
Good practice would be to refactor. What about these objects means that they should be passed in to this method? Should they be encapsulated into a single object?
Create a bean class, set all the parameters (via setter methods), and pass this bean object to the method.
Look at your code, and see why all those parameters are passed in. Sometimes it is possible to refactor the method itself.
Using a map leaves your method vulnerable. What if somebody using your method misspells a parameter name, or passes a string where your method expects a UDT?
Define a Transfer Object. It'll provide you with type checking at the very least; it may even be possible for you to perform some validation at the point of use instead of within your method.
I would say stick with the way you did it before.
The number of parameters in your example is not a lot, but the alternatives are much more horrible.
Map - There's the efficiency thing that you mentioned, but the bigger problems here are:
Callers don't know what to send you without referring to something else... Do you have javadocs which state exactly what keys and values are used? If you do (which is great), then having lots of parameters isn't a problem either.
It becomes very difficult to accept different argument types. You can either restrict input parameters to a single type, or use Map<String, Object> and cast all the values. Both options are horrible most of the time.
Wrapper objects - this just moves the problem since you need to fill the wrapper object in the first place - instead of directly to your method, it will be to the constructor of the parameter object.
To determine whether moving the problem is appropriate or not depends on the reuse of said object. For instance:
Would not use it: It would only be used once on the first call, so a lot of additional code to deal with 1 line...?
{
AnObject h = obj.callMyMethod(a, b, c, d, e, f, g);
SomeObject i = obj2.callAnotherMethod(a, b, c, h);
FinalResult j = obj3.callAFinalMethod(c, e, f, h, i);
}
May use it: Here, it can do a bit more. First, it can factor the parameters for 3 method calls. It can also perform 2 other lines in itself... so it becomes a state variable in a sense...
{
AnObject h = obj.callMyMethod(a, b, c, d, e, f, g);
e = h.resultOfSomeTransformation();
SomeObject i = obj2.callAnotherMethod(a, b, c, d, e, f, g);
f = i.somethingElse();
FinalResult j = obj3.callAFinalMethod(a, b, c, d, e, f, g, h, i);
}
Builder pattern - this is an anti-pattern in my view. The most desirable error-handling mechanism is to detect errors earlier, not later; but with the builder pattern, calls with missing mandatory parameters (ones the programmer did not think to include) are moved from compile time to run time. Of course, if the programmer intentionally put null or such in the slot, that'll be runtime anyway, but still, catching some errors earlier is a much bigger advantage than catering to programmers who refuse to look at the parameter names of the method they are calling.
I find it only appropriate when dealing with a large number of optional parameters, and even then, the benefit is marginal at best. I am very much against the builder "pattern".
The other thing people forget to consider is the role of the IDE in all this.
When methods have parameters, IDEs generate most of the code for you, and you have the red lines reminding you what you need to supply/set. When using option 3... you lose this completely. It's now up to the programmer to get it right, and there's no cues during coding and compile time... the programmer must test it to find out.
Furthermore, options 2 and 3, if adopted widely and unnecessarily, have long-term negative implications in terms of maintenance due to the large amount of duplicate code they generate. The more code there is, the more there is to maintain, and the more time and money is spent maintaining it.
This is often an indication that your class holds more than one responsibility (i.e., your class does TOO much).
See The Single Responsibility Principle for further details.
If you are passing too many parameters, then try to refactor the method. Maybe it is doing a lot of things that it is not supposed to do. If that is not the case, then try substituting the parameters with a single class. This way you can encapsulate everything in a single class instance and pass the instance around instead of the parameters.
... and Bob's your uncle: No-hassle fancy-pants APIs for object creation!
https://projectlombok.org/features/Builder