I need to programmatically rename identifiers within a given scope, for example within a method in a Java program. Given the following Java method:
public void doSomething() {
    int x = 10;
    int y = x * 2;
    int z = x + y;
}
After renaming the variables (x to a, y to b, and z to c) I should obtain the following method:
public void doSomething() {
    int a = 10;
    int b = a * 2;
    int c = a + b;
}
How can I programmatically implement such renaming of identifiers and their references?
I have been looking into the Eclipse AST and the Java Model. In either case I would have to implement a search for all occurrences of a given identifier and then replace them. I am wondering if there is a better way to do this (how does the Eclipse Refactoring UI support such variable renaming?). Or should I look into the Language Toolkit (org.eclipse.ltk.core.refactoring)? Any tutorial, sample code, or suggestions?
Please help.
I'm unclear as to whether you wish to perform this renaming across the board (i.e. all methods in all classes) or just in specific classes which you would identify manually. Assuming the latter, I would recommend that you:
Use the Java reflection API (java.lang.reflect) to identify all methods defined within the particular class that requires local variable renaming.
Iterate across all methods. For each method, use the org.eclipse.jdt.core API to descend through the hierarchy of compilation elements, selecting locally scoped variable definitions, and get the name of each variable from the compilation unit.
Generate your new variable name, something like Old ==> OldRenamed. Apply whatever renaming heuristic you wish at this point.
Invoke the Eclipse development kit org.eclipse.jdt.ui.refactoring API methods to do the variable renaming within the method. Effectively you would be invoking the Eclipse Rename Variable functionality headlessly by using this API.
That should do the trick.
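If you would rather stay at the AST level than drive the UI refactoring, here is a minimal, hedged sketch using the JDT DOM API (org.eclipse.jdt.core.dom plus org.eclipse.jface.text must be on the classpath; the class and method names below are my own). It parses a source string, renames matching identifiers, and applies the resulting edits to an in-memory Document only, so nothing is written to disk:

import java.util.Map;
import org.eclipse.jdt.core.dom.*;
import org.eclipse.jface.text.Document;
import org.eclipse.text.edits.TextEdit;

public class LocalRenamer {

    public static String rename(String source, Map<String, String> renames) throws Exception {
        ASTParser parser = ASTParser.newParser(AST.JLS8);
        parser.setKind(ASTParser.K_COMPILATION_UNIT);
        parser.setSource(source.toCharArray());

        CompilationUnit cu = (CompilationUnit) parser.createAST(null);
        cu.recordModifications();                       // track AST edits so they can be rewritten later

        cu.accept(new ASTVisitor() {
            @Override
            public boolean visit(SimpleName node) {
                // NOTE: matching on the identifier string alone will also hit unrelated
                // names; a real implementation should resolve bindings and restrict
                // itself to the target method's scope.
                String replacement = renames.get(node.getIdentifier());
                if (replacement != null) {
                    node.setIdentifier(replacement);
                }
                return true;
            }
        });

        Document document = new Document(source);
        TextEdit edits = cu.rewrite(document, null);    // compute text edits from the AST changes
        edits.apply(document);                          // apply them in memory only
        return document.get();                          // the renamed source
    }
}

With the example above, calling LocalRenamer.rename(source, Map.of("x", "a", "y", "b", "z", "c")) (Java 9+ for Map.of) should produce the renamed method without touching any file.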
I am able to rename using the following code.
RenameSupport renameSupport = RenameSupport.create(field, newName, RenameSupport.UPDATE_REFERENCES);
renameSupport.perform(workbench.getShell(), workbench);
But it applies the changes to the actual source files. Is there any way that can be prevented? I just need the renamed code internally; it must not change the actual source.
I really question the wisdom of doing this, but it can be done using an annotation processor. You will have to accept that the annotation processor cannot modify existing source code, but it can create new source code that is basically a clone of the original (except for the renames).
Create annotations that define the search and replace.
Annotate your source code. You can create a source file package-info.java and annotate at the package level.
Your processor could use the Compiler Tree API via the Trees.instance method.
You could extend SimpleTreeVisitor to do the actual copying (a skeleton is sketched below). It will be tedious, but not very complicated.
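A rough, hedged skeleton of such a processor. The annotation name RenameLocals and its package are made up for illustration, and the sketch uses a TreeScanner, which descends into child nodes automatically, where a SimpleTreeVisitor would need explicit recursion:

import java.util.Set;
import javax.annotation.processing.*;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.Element;
import javax.lang.model.element.TypeElement;
import com.sun.source.tree.Tree;
import com.sun.source.tree.VariableTree;
import com.sun.source.util.TreeScanner;
import com.sun.source.util.Trees;

@SupportedAnnotationTypes("com.example.RenameLocals")   // hypothetical annotation
@SupportedSourceVersion(SourceVersion.RELEASE_8)
public class RenameProcessor extends AbstractProcessor {

    private Trees trees;

    @Override
    public synchronized void init(ProcessingEnvironment env) {
        super.init(env);
        trees = Trees.instance(env);                    // entry point to the Compiler Tree API
    }

    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        for (TypeElement annotation : annotations) {
            for (Element element : roundEnv.getElementsAnnotatedWith(annotation)) {
                Tree tree = trees.getTree(element);     // AST of the annotated element
                if (tree == null) {
                    continue;                           // not backed by source in this round
                }
                tree.accept(new TreeScanner<Void, Void>() {
                    @Override
                    public Void visitVariable(VariableTree node, Void p) {
                        // node.getName() is the declared identifier; the processor would
                        // emit a renamed copy of the source via processingEnv.getFiler().
                        return super.visitVariable(node, p);
                    }
                }, null);
            }
        }
        return false;   // do not claim the annotations
    }
}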
The only use case that I can think of for this is that you have prepared source code with variable names that make sense in human language A (e.g. Japanese) and you wish to discuss it with an audience (boss, students, clients, etc.) who are more comfortable with human language B (e.g. Arabic)
Other than that, I cannot think of any good reason to go to so much trouble. In the example given, x, y, and z are local variables. Since their scope is so small, there is really no pressing need to rename them.
Select variable
Click right mouse button
Select refactor
Select rename
Type the new variable name
Eclipse will rename this variable in all places it's used
I have a class with a few constant values assigned, and I want to refer to them in another class which belongs to a different project.
Let's say I have the class below:
public class sampleConstants {
    private static final String SOME_VALUE = "ABC";
    private static final String SOME_VALUE2 = "DEF";
    // ...
}
The class sampleConstants belongs to project X, and I want to refer to the constant variables from this class in another class which belongs to project Y.
Is it possible? If yes, please let me know.
You have to first get clear on your own requirements: when you change one of those constants, does that mean that you should immediately recompile both projects?
Or is it OK that you define SOME_VALUE=1 within X today, and you compile and create a JAR with that constant ... and import that within Y. And when you happen to change SOME_VALUE=2 tomorrow, your project Y can continue to work with the initial value for some time?
Thus, the real point here is: depending on how these constants are used, the JAR approach suggested in the other answers can be fully okay, but it can also be the wrong approach. You see, when your two projects are tightly coupled, and changing the constant in one place makes it necessary to change it in the second, too ... then the real answer could be to look carefully at the way you defined your projects. Maybe, just maybe, it isn't a good idea to have the code that needs these constants reside in two different projects.
Maybe the real solution would be to ensure that all code that uses these constants sits in a single project.
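To make the recompilation point concrete, here is a small illustrative sketch. Note that the constants have to be public (the fields in the question are private, so project Y could not see them at all), and that String and primitive constants like these are compile-time constants that javac inlines into the calling class:

// Project X, e.g. SampleConstants.java, compiled into x.jar:
public class SampleConstants {
    public static final String SOME_VALUE = "ABC";   // a compile-time constant
}

// Project Y, e.g. Consumer.java, compiled against x.jar:
public class Consumer {
    public static void main(String[] args) {
        // javac copies the literal "ABC" into Consumer.class at compile time.
        // If SOME_VALUE is later changed in X and only x.jar is rebuilt,
        // this line keeps printing "ABC" until Y itself is recompiled.
        System.out.println(SampleConstants.SOME_VALUE);
    }
}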
You can add the project X jar to project Y's build path, and after that you can access the constants class.
Obviously you can add the project with the constants as a jar dependency of your other project; any build tool will allow this.
But it's better to consider moving such values into a *.properties file and keeping them separate from the Java code.
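A minimal sketch of that approach, assuming a config.properties file on the classpath with entries such as some.value=ABC (the file name and keys are illustrative):

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public final class Config {

    private static final Properties PROPS = new Properties();

    static {
        // Load once from the classpath; fail fast if the file is missing.
        try (InputStream in = Config.class.getResourceAsStream("/config.properties")) {
            if (in == null) {
                throw new IOException("config.properties not found on the classpath");
            }
            PROPS.load(in);
        } catch (IOException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    private Config() {}

    public static String get(String key) {
        return PROPS.getProperty(key);
    }
}

Both projects can then read Config.get("some.value") at runtime instead of compiling the literal into their classes.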
I have some (maybe) strange requirements: I want to detect definitions of local (method) variables of a given interface type. When finding such a variable, I would like to detect which methods (set*/get*) will be called on it.
I tried Javassist without luck, and now I am taking a deeper look at ASM, but I am not sure whether what I want is possible.
The reason for this is that I would like to generate a dependency graph with GraphViz of beans that depend on the same data structure.
If this is possible, could somebody please give me a hint on how it could be done? Maybe there are other frameworks that could do this?
Edit (01.09.2015)
To make things clearer:
The interface is self-written. The goal of the whole effort is first to create a dependency graph automatically; later on, a graphical editor based on the dependencies should be implemented.
I wonder how FindBugs/PMD work, because they also use the bytecode and detect, for example, possible null dereferences (a variable not initialized and a method called on it). So I thought that I could implement my idea in the same way. The whole code is Spring based - maybe this opens up another solution? Last but not least, I could also work on a source jar.
While thinking about the problem: would it be possible via ASM/Javassist to detect all available methods from the interface and find calls to them in the other classes?
I’m afraid, what you want to do is not possible. In compiled Java code, there are no local variables in the form you have in the source code. Methods use stack frames which have memory reserved for local variables, which is addressed by a numerical index. The type is implied by what instructions write to it and may change throughout the method’s code as the memory may get reused for different variables having a disjunct scope. The names on the other hand are completely irrelevant.
When bytecode gets verified, the effect of all instructions to the stack frame will get modeled to infer the type of each stack frame slot at each point of the execution so that the validity of all operations can be checked. Starting with class file version 50, there will be StackMapTable attributes aiding the process by containing explicit type information, but only for code with branches. For sequential code, the type of variables still has to be derived by inference.
These inferred types are not necessarily the declared types. E.g., on the byte code level, there will be no difference between
CharSequence cs="foo";
cs.charAt(0);
and
String s="foo";
((CharSequence)s).charAt(0);
In both cases, there will be a storage of a String constant into a local variable followed by the invocation of an interface method. The inferred type will be String in both cases and the invocation of a CharSequence method considered valid as String implements CharSequence.
This disproves the idea of detecting that there is a local variable declared using the CharSequence (interface) type, as the actual declared type is irrelevant and not stored in the regular byte code.
There are, however, debugging attributes containing information about the local variables, see the LocalVariableTable attribute, and libraries like ASM will tell you about the declarations if such information is present. But you can't rely on this optional information; e.g. Oracle's JRE libraries are by default shipped without it.
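For completeness, a hedged ASM sketch that simply dumps whatever the LocalVariableTable contains; it only shows declared names and types for classes compiled with debug information (javac -g) and prints nothing otherwise. The class and method names around the ASM calls are my own:

import org.objectweb.asm.ClassReader;
import org.objectweb.asm.ClassVisitor;
import org.objectweb.asm.Label;
import org.objectweb.asm.MethodVisitor;
import org.objectweb.asm.Opcodes;
import org.objectweb.asm.Type;

public class LocalVariableDumper {

    public static void dump(byte[] classFile) {
        new ClassReader(classFile).accept(new ClassVisitor(Opcodes.ASM9) {
            @Override
            public MethodVisitor visitMethod(int access, String name, String descriptor,
                                             String signature, String[] exceptions) {
                return new MethodVisitor(Opcodes.ASM9) {
                    @Override
                    public void visitLocalVariable(String varName, String varDescriptor,
                                                   String varSignature, Label start,
                                                   Label end, int index) {
                        // Only called when the LocalVariableTable attribute is present.
                        System.out.println(name + ": " + varName + " : "
                                + Type.getType(varDescriptor).getClassName());
                    }
                };
            }
        }, 0);
    }
}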
I'm not sure I understood exactly what you want, but:
you could use an interface for this.
Every object that has getters could implement an interface called, say, Getable.
Then you could do stuff only on the objects that have the methods they get from implementing the Getable interface.
https://docs.oracle.com/javase/tutorial/java/IandI/createinterface.html
Similar Questions: Here and Here
I guess the situation is pretty uncommon to begin with, and so I admit it is probably too localized for SO.
The Problem
public class bqf implements azj
{
    ...
    public static float b = 0.0F;
    ...
    public void b(...)
    {
        ...
        /* b, in both references below,
         * is meant to be a class (in the
         * default package)
         *
         * It is being obscured by field
         * b on the right side of the
         * expression.
         */
        b var13 = b.a(var9, var2, new br());
        ...
    }
}
The error is: cannot invoke a(aji, String, br) on primitive type float.
Compromisable limitations:
Field b cannot be renamed.
Class b cannot be renamed or refactored.
Why
I am modifying an obfuscated program. For irrelevant[?], unknown (to me), and uncompromisable reasons the modification must be done via patching the original jar with .class files. Hence, renaming the public field b or class b would require modifying much of the program. Because all of the classes are in the default package, refactoring class b would require me to modify every class which references b (much of the program). Nevertheless there is a substantial amount of modification I do intend on doing, and it is a pain to do it at the bytecode level; just not enough to warrant renaming/refactoring.
Possible Solutions
The most obvious one is to rename/refactor. There are thousands of classes, and every single one is in the default package. It seems like every Java program I want to modify has that sort of obfuscation. : (
Anyway, sometimes I do take the time to just go about manually renaming/refactoring the program. But when there are too many errors (I once did 18,000), this is not a viable option.
The second obvious option is to do it in bytecode (via ASM). Sometimes this is ok, when the modifications are small or simple enough. Unfortunately doing bytecode modifications on only the files which I can't compile through java (which is most of them, but this is what I usually try to do) is painfully slow by comparison.
Sometimes I can extend class b, and use that in my modified class. Obviously this won't always work, for example when b is an enum. Unfortunately this means a lot of extra classes.
It may be possible to create a class with static wrapper methods to avoid obscurity. I just thought of this.
A tool which remaps all of the names (not deobfuscate, just unique names), then unmaps them after you make modifications. That would be sweet. I should make one if it doesn't exist.
The problem would also be solved with a way to force the java compiler to require the keyword "this".
b.a(var9, var2, new br());
can easily be rewritten using reflection:
Class.forName("b").getMethod("a", argTypes...).invoke(null, var9, var2, new br());
The problem would also be solved with a way to force the java compiler to require the keyword "this".
I don't see how this would help you for a static member. The compiler would have to require us to qualify everything—basically, disallow simple names altogether except for locals.
Write a helper method elsewhere that invokes b.a(). You can then call that.
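For example (BHelper is a made-up name; it has to live in the default package as well, because classes in the default package cannot be imported from a named package). The parameter types come from the compiler error in the question:

// In this class there is no field named b, so the simple name b resolves to
// the class again and the static call compiles normally.
public final class BHelper {

    private BHelper() {}

    public static b a(aji p1, String p2, br p3) {
        return b.a(p1, p2, p3);
    }
}

// Inside bqf.b(...) the call then becomes:
//     b var13 = BHelper.a(var9, var2, new br());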
Note: In Java the convention is that the class would be named B and not b (which goes for bqf and aqz too), and if that had been followed the problem would not have shown up.
The real, long-term cure is not to put classes in the default package.
This question already has answers here:
Why are forward declarations necessary? [duplicate]
I've taken a Java course and am trying to teach myself C with K&R. So far so good but I don't understand the purpose of prototypes. See the 2 // comments in the code below:
#include <stdio.h>
#include <stdlib.h>  /* for system() */

float convert(int); **//Why is this needed...**

int main(void)
{
    int i;
    for (i = 0; i <= 300; i += 20)
        printf("F: %3d C: %6.1f\n", i, convert(i));
    system("Pause");
    return 0;
}

float convert(int f) **//When we already have this?**
{
    float c = (5.0 / 9.0) * (f - 32.0);
    return c;
}
In Java, you'd declare a function something like public static float convert(int f) and not need a prototype at all. That seems much simpler to me. Why the difference?
This is essentially a design decision made for the language.
Note that both the Java and C compilers need to know the function signature so that they can do type checking, and compile the code.
Also note that languages like C need you to supply this signature / prototype separately (in a declaration), when in fact the function definition has the exact same information. So, it is basically a repetition of the information. Why is this? Essentially, this is so that code can be compiled in the absence of the actual source code that contains the definition. So, if a library is supplied as binary code, then having the headers which contain the prototypes is enough to allow the compilation of other code that uses the code from the library.
More modern languages like Java and C# do away with the need to repeat this prototype information. How do they then compile code, when they do need the prototype? What they do is store the prototype information along with the binary code, at the time they process the definition. So, really, the prototype information is just auto generated by the compiler itself.
The Java compiler can find a class via its name and package and check the source directly. Note: if the Java compiler cannot do this, it won't compile.
In C, there is no restriction on what definitions you can place where and so you have to first let it know what you might be defining later.
In C an identifier usually has to be declared before it can be used. A function prototype serves as a declaration for a function. C is an old language and forcing the programmer to declare function identifiers helped the programming of the compiler / linker, particularly when functions are used and defined in different translation units.
Both C and Java check at compile time that the function call matches the function signature.
A C compiler always relies on a function declaration/prototype in source code. The function declaration must appear before the call.
A Java compiler may obtain the function declaration from:
Anywhere in the same top-level class. The function definition does not have to be placed above all calls (see the sketch after this list).
From another class's source code or a compiled *.class file. The fully qualified class name allows the *.class file to be found on the classpath.
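A small sketch of the first point: the call appears above the definition in the same class and compiles without any prototype (the class name is invented, the body mirrors the C example from the question):

public class Temperature {

    public static void main(String[] args) {
        // convert is defined further down; the compiler reads the whole class
        // before resolving calls, so no forward declaration is needed.
        for (int f = 0; f <= 300; f += 20) {
            System.out.printf("F: %3d C: %6.1f%n", f, convert(f));
        }
    }

    static float convert(int f) {
        return (5.0f / 9.0f) * (f - 32.0f);
    }
}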
I'm a bit confused as to why languages have these. I'm a Java programmer and at the start of my career so Java is the only language I've written in since I started to actually, you know, get it.
So in Java of course we don't have properties and we write getThis() and setThat(...) methods.
What would we gain by having properties?
Thanks.
EDIT: another query: what naming conventions arise in languages with properties?
Which one looks more natural to you?
// A
person.setAge(25);
// B
person.age = 25;
// or
person.Age = 25; // depending on conventions, but that's beside the point
Most people will answer B.
It's not only syntactic sugar; it also helps when doing reflection: you can actually tell the difference between data and operations without resorting to the names of the methods.
Here is an example in C# for those not familiar with properties:
class Person
{
    public int Age
    {
        set
        {
            if (value < 0)
                throw new ArgumentOutOfRangeException();
            age = value;
            OnChanged();
        }
        get { return age; }
    }

    private int age;

    protected virtual void OnChanged() { /* ... */ }
}
Also, most people use properties from the start rather than exposing a public member and promoting it later, for the same reason we always use get/set: there is no need to rewrite the old client code bound to data members.
The syntax is much nicer:
button.Location += delta;
than:
button.setLocation(button.getLocation() + delta);
Edit:
The code below assumes that you are doing everything by hand. In my example world the compiler would generate the simple get/set methods and convert all direct variable access to those methods. If it didn't, then the client code would have to be recompiled, which defeats a big part of the purpose.
Original:
The main argument for properties is that they remove the need to recompile your code if you go from a variable to a method.
For instance:
public class Foo
{
    public int bar;
}
If we later decided to add validation to "bar" we would need to do this:
public class Foo
{
    private int bar;

    public void setBar(final int val)
    {
        if (val <= 0)
        {
            throw new IllegalArgumentException("val must be > 0, was: " + val);
        }

        bar = val;
    }

    public int getBar()
    {
        return (bar);
    }
}
But adding the set/get methods (and making the field private) would break all of the existing client code that accessed the field directly. If it was done via properties then you would be able to add the validation after the fact without breaking client code.
I personally don't like the idea - I am much happier with the idea of using annotations and having the simple set/get generated automatically, with the ability to provide your own set/get implementations as needed (but I don't like hidden method calls).
Two reasons:
Cleaner/terser syntax; and
It more clearly indicates to the user of the class the difference between state (properties) and behaviour (methods).
In Java the getters and setters are in essence properties.
In other modern languages (C#, etc.) it just makes the syntax easier to work with and comprehend.
They are unnecessary, and there are workarounds in most cases.
It's really a matter of preference, but if the language you're using supports them I would recommend using them :)
I struggled with this at first, too, however I've really come to appreciate them. The way I see it, properties allow me to interact with the exposed data in a natural way without losing the encapsulation provided by getter/setter methods. In other words, I can treat my properties as fields but without really exposing the actual fields if I choose not to. With automatic properties in C# 3.0 it gets even better as for most fields -- where I want to allow the consumer to read/write the data -- I have even less to write:
public string Prop { get; set; }
In the case where I want partial visibility, I can restrict just the accessor I want easily.
public string Prop { get; private set; }
All of this can be done with getter/setter methods, but the verbiage is much higher and the usage is much less natural.
A general rule of object oriented programming is that you never change an existing interface. This ensures that, while the inner workings of an object may change, code calling the object doesn't need to know about it.
Properties in other languages are methods masquerading as a specific language feature. In Java a property is distinguished only by convention. While in general this works, there are cases where it limits you. For example, sometimes you would want to use hasSomething instead of isSomething or getSomething.
So it allows flexibility of names, while tools and other code depending on them can still tell the difference.
Also the code can be more compact and the get and set are grouped together by design.
In Object Oriented Software Construction 2 Bertrand Meyer calls this the "Uniform Access Principle" and the general idea is that when a property goes from a simple one (i.e. just an integer) to a derived one (a function call), the people using it shouldn't have to know.
You don't want everyone using your code to have to change from
int x = foo.y;
to
int x = foo.y();
That breaks encapsulation because you haven't changed your "interface" just your "implementation".
You can also create derived fields and read-only/write-only fields. Most Properties that I've seen in languages I've worked in allow you to not only assign simple fields, but also full functions to properties.
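In Java getter terms, the same idea looks like the sketch below (the class and fields are invented for illustration): the exposed method starts out backed by a stored value and is later changed to a derived, read-only one, and callers of getAvailable() never notice the difference.

public class Account {

    private final double balance;
    private final double pendingCredits;

    public Account(double balance, double pendingCredits) {
        this.balance = balance;
        this.pendingCredits = pendingCredits;
    }

    // Version 1 could simply have returned a stored field:
    //     public double getAvailable() { return available; }
    // Version 2 derives the value instead; client code still calls
    // account.getAvailable() and is unaffected by the change.
    public double getAvailable() {
        return balance + pendingCredits;
    }
}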
Properties provide a simple way to abstract the details of the logic inside an object down to a single value exposed to the outside world.
While your property may start out as just a stored value, this abstraction decouples the interface so that its details can be changed later with minimal impact.
A general rule of thumb is that abstraction and loose coupling are good things. Properties are a pattern that achieve both.
Properties at the language level are a bad idea. There's no good convention for them and they hide performance deficits in the code.
It's all about bindings
There was a time when I considered properties to just be syntactic sugar (i.e. help the developer by having them type a bit less). As I've done more and more GUI development, and started using binding frameworks (JGoodies, JSR295), I have discovered that language level properties are much, much more than syntactic sugar.
In a binding scenario, you essentially define rules that say 'property X of object A should always be equal to property Y of object B'. Shorthand is: A.x <-> B.y
Now, imagine how you would go about actually writing a binding library in Java. Right now, it is absolutely not possible to refer to 'x' or 'y' directly as language primitives. You can only refer to them as strings (and access them via reflection). In essence, A."x" <-> B."y"
This causes massive, massive problems when you go to refactor code.
There are additional considerations, including proper implementation of property change notifications. If you look at my code, every blessed setter requires a minimum of 3 lines to do something that is incredibly simple. Plus one of those 3 lines includes yet another string:
public void setFoo(Foo foo) {
    Foo old = getFoo();
    this.foo = foo;
    changeSupport.firePropertyChange("foo", old, foo);
}
All of these strings floating around are a complete nightmare.
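For context, here is a hedged sketch of the full java.beans pattern that setter comes from, including the consumer side, where the property is again identified only by the string "foo". The bean is illustrative and uses a String property in place of the Foo type above:

import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;

public class FooBean {

    private final PropertyChangeSupport changeSupport = new PropertyChangeSupport(this);
    private String foo;   // stands in for the Foo type used above

    public String getFoo() { return foo; }

    public void setFoo(String foo) {
        String old = getFoo();
        this.foo = foo;
        changeSupport.firePropertyChange("foo", old, foo);   // string occurrence #1
    }

    public void addPropertyChangeListener(String propertyName, PropertyChangeListener listener) {
        changeSupport.addPropertyChangeListener(propertyName, listener);
    }

    public static void main(String[] args) {
        FooBean bean = new FooBean();
        // string occurrence #2: renaming the property breaks this silently,
        // which is exactly the refactoring problem described above.
        bean.addPropertyChangeListener("foo", evt -> System.out.println(evt.getNewValue()));
        bean.setFoo("bar");   // prints "bar"
    }
}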
Now, imagine if a property was a first-class citizen in the language. This starts to provide almost endless possibilities (for example, imagine registering a listener with a Property directly instead of having to muck with PropertyChangeSupport and its 3 mystery methods that have to get added to every class). Imagine being able to pass the property itself (not the value of the property, but the Property object) into a binding framework.
For web tier developers, imagine a web framework that can build its own form id values from the names of the properties themselves (something like registerFormProperties(myObject.firstname, myObject.lastname, someOtherObject.amount)) to allow for round-trip population of object property values when the form is submitted back to the server. Right now to do that, you'd have to pass strings in, and refactoring becomes a headache (refactoring actually becomes downright scary once you are relying on strings and reflection to wire things up).
So anyway, for those of us who are dealing with dynamic data updates via binding, properties are a much-needed feature in the language - way more than just syntactic sugar.