I want to modify a method using BCEL, but I do not know how to update the exception table. Here's simplified code:
ConstantPoolGen poolGen = classGen.getConstantPool();
InstructionList iList = new InstructionList(method.getCode().getCode());
MethodGen newMethodGen = new MethodGen(method, classGen.getClassName(), poolGen);

for (InstructionHandle handle : iList.getInstructionHandles().clone()) {
    if (I_WANT_TO_WRAP_IT(handle)) {
        iList.insert(handle, MAKE_WRAPPER(handle));
        iList.delete(handle);
    }
}

classGen.removeMethod(method);
newMethodGen.setMaxStack();
newMethodGen.setMaxLocals();
classGen.addMethod(newMethodGen.getMethod());
After this the bytecode is properly modified, but the exception table is unchanged, which leads to a ClassFormatError because the exception table points to a nonexistent PC. Any idea how to deal with this?
Normally you don't need to deal with this, as BCEL should take care of it. It seems to me that your mistake is using a different instruction list than the one owned by MethodGen, so you're modifying the underlying code but the offsets are not updated correctly.
Try to use
MethodGen newMethodGen = new MethodGen(method, classGen.getClassName(), poolGen);
InstructionList iList = newMethodGen.getInstructionList();
to ensure that the same list is used by your code and MethodGen.
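For illustration, here is a minimal sketch of the corrected flow. It keeps the question's placeholder helpers (I_WANT_TO_WRAP_IT and MAKE_WRAPPER, assumed to return an InstructionList); the key point is simply that the edits happen on the list owned by MethodGen, so exception-handler targets are tracked as handles and re-resolved when the method is regenerated. The TargetLostException handling is the usual BCEL idiom for redirecting anything that still points at a deleted instruction:

MethodGen newMethodGen = new MethodGen(method, classGen.getClassName(), poolGen);
InstructionList iList = newMethodGen.getInstructionList();

for (InstructionHandle handle : iList.getInstructionHandles()) {
    if (I_WANT_TO_WRAP_IT(handle)) {
        // Insert the wrapper before the old instruction, then remove the old one.
        InstructionHandle replacement = iList.insert(handle, MAKE_WRAPPER(handle));
        try {
            iList.delete(handle);
        } catch (TargetLostException e) {
            // Anything still targeting the deleted handle (branches, exception
            // handlers, line numbers) is redirected to the wrapper.
            for (InstructionHandle lost : e.getTargets()) {
                for (InstructionTargeter targeter : lost.getTargeters()) {
                    targeter.updateTarget(lost, replacement);
                }
            }
        }
    }
}

classGen.removeMethod(method);
newMethodGen.setMaxStack();
newMethodGen.setMaxLocals();
classGen.addMethod(newMethodGen.getMethod());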
I want to extract signature changes (method parameter changes, to be exact) from commits to a git repository using a Java program. I have used the following code:
for (Ref branch : branches) {
    String branchName = branch.getName();
    for (RevCommit commit : commits) {
        boolean foundInThisBranch = false;
        RevCommit targetCommit = walk.parseCommit(repo.resolve(commit.getName()));
        for (Map.Entry<String, Ref> e : repo.getAllRefs().entrySet()) {
            if (e.getKey().startsWith(Constants.R_HEADS)) {
                if (walk.isMergedInto(targetCommit, walk.parseCommit(e.getValue().getObjectId()))) {
                    String foundInBranch = e.getValue().getName();
                    if (branchName.equals(foundInBranch)) {
                        foundInThisBranch = true;
                        break;
                    }
                }
            }
        }
I can extract the commit message, commit data and author name from that; however, I am not able to extract parameter changes, i.e. I cannot identify which method parameters changed. I want to know if there is any way to recognize that. It cannot be recognized from the commit messages written by programmers; I am looking for something like a specific annotation or some other mechanism.
This is my code to extract differences:
CanonicalTreeParser oldTreeIter = new CanonicalTreeParser();
oldTreeIter.reset(reader, oldId);
CanonicalTreeParser newTreeIter = new CanonicalTreeParser();
newTreeIter.reset(reader, headId);

List<DiffEntry> diffs = git.diff()
        .setNewTree(newTreeIter)
        .setOldTree(oldTreeIter)
        .call();

ByteArrayOutputStream out = new ByteArrayOutputStream();
DiffFormatter df = new DiffFormatter(out);
df.setRepository(git.getRepository());
The output is really huge, and it is impossible to extract the method changes from it.
You show a way you've found to examine the diffs, but say that the output is too large and you can't extract the method signature changes. If by that you mean you're asking whether git has specific support for telling you that a method signature changed, then no, no such support exists. This is because git does not "know" anything about the languages you may or may not have used in the files under source control. Everything is just content that is, or is not, different from other content.
Since a method signature could be split across lines in any number of ways, it's not even guaranteed that just because a method's signature changed its name would appear anywhere in the diff. What you would really have to do is perform a sort of "structural diff". That is, you would have to
check out the "old" version, and pass it to a Java parser
check out the "new" version, and pass it to a Java parser
compare the resulting parse trees, looking for methods that belong to the same object, but have changed
Even that won't be terribly easy, because methods could be renamed, and because method overloading could make it unclear which signature change goes with which version of a method.
From there what you have is a non-trivial coding problem, which is beyond the scope of SO to answer. If you decide to tackle this problem and run into specific programming questions along the way, of course you could post those questions and perhaps someone will be able to help.
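If you do decide to tackle it, here is a rough sketch of the parse-and-compare step. It assumes the JavaParser library (com.github.javaparser) and assumes you have already read the old and new versions of a file into strings (for example from the blobs referenced by a DiffEntry); it is only a starting point, not a full solution:

import com.github.javaparser.StaticJavaParser;
import com.github.javaparser.ast.body.MethodDeclaration;

import java.util.HashSet;
import java.util.Set;
import java.util.stream.Collectors;

public class SignatureDiff {

    // Parses one version of a Java source file and returns its method signatures,
    // e.g. "compute(int, String)".
    static Set<String> signaturesOf(String source) {
        return StaticJavaParser.parse(source)
                .findAll(MethodDeclaration.class).stream()
                .map(m -> m.getNameAsString() + m.getParameters().stream()
                        .map(p -> p.getType().asString())
                        .collect(Collectors.joining(", ", "(", ")")))
                .collect(Collectors.toSet());
    }

    // Prints signatures present in only one version; note that a renamed or
    // re-parameterized method shows up as one removal plus one addition, which is
    // exactly the ambiguity described above.
    static void printSignatureChanges(String oldSource, String newSource) {
        Set<String> oldSigs = signaturesOf(oldSource);
        Set<String> newSigs = signaturesOf(newSource);

        Set<String> removed = new HashSet<>(oldSigs);
        removed.removeAll(newSigs);
        Set<String> added = new HashSet<>(newSigs);
        added.removeAll(oldSigs);

        removed.forEach(s -> System.out.println("only in old version: " + s));
        added.forEach(s -> System.out.println("only in new version: " + s));
    }
}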
I know that there are a few questions already in this forum relating to mine, but none of them really seems to help me.
Since I am new to coding, I am still trying to figure out what exactly the getClass() and getMethod() calls help me with.
What I want to accomplish:
// init:
List<Preview> listPreview;
List<Preview> listTemp;
// now create the Lists (from a Database)
listPreview = dbHelper.getPreview("Hero", "Axe");
listTemp = dbHelper.getPreview("Hero", "Beastmaster");
// now I want to add ListTemp to ListPreview
Class myClass = listPreview.getClass();
Method m = myClass.getDeclaredMethod("add", new Class[] {Object.class});
m.invoke(listTemp, 2);
The Problem:
Obviously this is not working right now, but I think the idea is pretty straightforward: I want to add listTemp to listPreview. The getDeclaredMethod call is already reported as an undeclared exception, and I do not really understand why.
If you want to add two lists one after another, just use this:
listPreview.addAll(listTemp);
This is relatively simple. Why don't you use listPreview.addAll(listTemp)? This will add all the elements in listTemp to listPreview.
If you want to add the elements of the list with your reflection approach, use the code below.
Class myClass = listPreview.getClass();
Method m = myClass.getDeclaredMethod("addAll", Collection.class);
m.invoke(listPreview, listTemp);
OR
For a simpler way, you can use
listPreview.addAll(listTemp);
The error is
getDeclaredMethod is already reported as an undeclared exception
which means there is an unreported exception that must be caught or declared to be thrown. So below is a complete sample:
try {
    Class myClass = listPreview.getClass();
    Method m = myClass.getDeclaredMethod("addAll", Collection.class);
    m.invoke(listPreview, listTemp);
} catch (Throwable e) {
    System.err.println(e);
}
If I have a line like this:
var.getSomething().getSomethingElse().setNewValue(stuff.getValue().getWhatever());
If that line creates a NullPointerException, is there any way of finding out which method is returning a null value?
I believe I was able to split the line at every dot and get the exception showing which line was failing. But I can't get that to work anymore (maybe I remember incorrectly).
Is the only good debugging possibility to write it like this?
a = var.getSomething();
b = a.getSomehingElse();
c = stuff.getValue();
d = c.getWhatever();
b.setNewValue(d);
With this I should be able to easily see where the exception happens. But it feels inefficient and ugly to write this way.
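For reference, the same split can be made self-describing with java.util.Objects.requireNonNull, so the exception message names the call that returned null. This is only a sketch; Something, SomethingElse, Value and Whatever are placeholders for whatever types the getters really return:

// Each step fails fast with a message naming the call that returned null.
Something a = Objects.requireNonNull(var.getSomething(), "getSomething() returned null");
SomethingElse b = Objects.requireNonNull(a.getSomethingElse(), "getSomethingElse() returned null");
Value c = Objects.requireNonNull(stuff.getValue(), "getValue() returned null");
Whatever d = Objects.requireNonNull(c.getWhatever(), "getWhatever() returned null");
b.setNewValue(d);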
I use Android Studio. Used Eclipse before but moved to Android Studio some time ago.
You might want to put every part into "Watches" in the debugger.
But I'm pretty sure that both Eclipse and Android Studio let you inspect the content by just selecting the part you're interested in (if you are in debug mode).
The best I can advise is to use @Nullable and @NonNull annotations on all methods with return values. It would not tell you which line the null pointer came from, but it would help to prevent such situations in the future.
So if a method may return null and you have it in a call sequence, you will get a warning from Android Studio about this. In that case it is better to break up the sequence and check for null.
For example:
private static class Seq {
    private final Random rand = new Random();

    @NonNull
    public Seq nonNull() {
        return new Seq();
    }

    @Nullable
    public Seq nullable() {
        return rand.nextInt() % 100 > 50 ? new Seq() : null;
    }
}
If you write new Seq().nonNull().nonNull().nullable().nonNull(); you will get a warning from the IDE:
Method invocation `new Seq().nonNull().nonNull().nullable().nonNull()` may produce 'java.lang.NullPointerException'
The best solution in this case is to change code like so:
Seq seq = new Seq().nonNull().nonNull().nullable();
if (seq != null) {
    seq.nonNull();
}
Don't forget to add the annotations dependency to your Gradle build script:
compile 'com.android.support:support-annotations:22.+'
I am not positive about the way you are doing it; it makes your code tightly coupled and not unit testable.
var.getSomething().getSomethingElse().setNewValue(stuff.getValue().getWhatever());
Instead do something like
var.getSomething();
so that getSomething() internally does whatever you are doing as part of
getSomethingElse().setNewValue(stuff.getValue().getWhatever())
In the same way, getSomethingElse() should perform whatever you are doing as part of
setNewValue(stuff.getValue().getWhatever())
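To make that concrete, here is a rough sketch under assumed names (Var, Something, SomethingElse, Stuff and Value are hypothetical stand-ins for the real classes): each object exposes one intention-revealing method, so the caller writes var.applyWhateverFrom(stuff) instead of reaching through two levels of getters, and each class can be mocked independently in unit tests.

class Var {
    private final Something something;
    Var(Something something) { this.something = something; }

    // Replaces var.getSomething().getSomethingElse().setNewValue(...)
    void applyWhateverFrom(Stuff stuff) {
        something.applyWhateverFrom(stuff);
    }
}

class Something {
    private final SomethingElse somethingElse;
    Something(SomethingElse somethingElse) { this.somethingElse = somethingElse; }

    void applyWhateverFrom(Stuff stuff) {
        somethingElse.setNewValue(stuff.whatever());
    }
}

class SomethingElse {
    private Object newValue;
    void setNewValue(Object value) { this.newValue = value; }
}

class Stuff {
    private final Value value;
    Stuff(Value value) { this.value = value; }

    // Hides stuff.getValue().getWhatever() behind a single call.
    Object whatever() {
        return value.getWhatever();
    }
}

class Value {
    Object getWhatever() { return "whatever"; }
}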
Part of a program I am working on requires looking up preprocessor macros by name, and then getting their values. I opted to use the CDT Indexer API. In order to make sure I am on the right track, I wrote a test method that does nothing but create a simple C file and confirm that it can find certain symbols in the index. However, I failed to get that test to run properly. Attempting to use IIndex.findBindings(char[], IndexFilter, IProgressMonitor) returns empty arrays for symbols that I know exist in the AST because they are part of the example file in the test method.
I can't post the exact test method because I use some custom classes and it would be overkill to post all of them, so I will just post the important code. First, my example file:
final String exampleCode =
"#define HEAVY 20\n" +
"#define TEST 5\n" +
"void function() { }\n" +
"int main() { return 0; }\n";
IFile exampleFile = testProject.getFile("findCodeFromIndex.c");
exampleFile.create(new ByteArrayInputStream(exampleCode.getBytes("UTF-8") ), true, null);
I have a custom class that automatically gets the IASTTranslationUnit from that file. The translation unit is fine (I can see the nodes making up everything except the macros). I get the index from that AST, and the code I use to look up in the index is
try {
    index.acquireReadLock();
    returnBinding = index.findBindings(name.toCharArray(), IndexFilter.ALL, null);
    ... catch stuff...
} finally {
    index.releaseReadLock();
}
Where 'name' is going to be either "HEAVY", "TEST", or "function". None of them are found, despite existing in the example C file.
I am guessing that the issue is the index is not rebuilt, which causes findBindings to return an empty array even if I know the given variable name exists in the AST.
My current attempt to start up the indexer looks like this:
final ICProject cProject = CoreModel.getDefault().getCModel().getCProject(testProject.getName());
CCorePlugin.getIndexManager().reindex(cProject);
CCorePlugin.getIndexManager().joinIndexer(IIndexManager.FOREVER, new NullProgressMonitor() );
Question Breakdown:
1) Is my method for searching the index sound?
2) If the issue is the index needing to be rebuilt, how should I properly force the index to be up to date for my test methods? Otherwise, what exactly is the reason I am not resolving the bindings for macros/functions I know exist?
I solved my own issue, so I will post the solution here. I was correct in my comment that the project not being a proper C project hindered the indexer from working properly; however, I also discovered I had to use a different method on the index to get the macros I needed.
Setting up the test environment:
Here is the code I have that creates a basic C project. The only purpose it serves is to allow the indexer to work for test methods. Still, it is large:
public static IProject createBareCProject(String name) throws Exception {
    IProject bareProjectHandle = ResourcesPlugin.getWorkspace().getRoot().getProject(name);
    IProjectDescription description =
            bareProjectHandle.getWorkspace().newProjectDescription("TestProject");
    description.setLocationURI(bareProjectHandle.getLocationURI());
    IProject bareProject =
            CCorePlugin.getDefault().createCDTProject(description, bareProjectHandle, new NullProgressMonitor());
    IManagedBuildInfo buildInfo = ManagedBuildManager.createBuildInfo(bareProject);
    IManagedProject projectManaged =
            ManagedBuildManager.createManagedProject(bareProject,
                    ManagedBuildManager.getExtensionProjectType("cdt.managedbuild.target.gnu.mingw.exe"));
    List<IConfiguration> configs = getValidConfigsForPlatform();
    IConfiguration config =
            projectManaged.createConfiguration(configs.get(0),
                    ManagedBuildManager.calculateChildId(configs.get(0).getId(), null));
    ICProjectDescription cDescription =
            CoreModel.getDefault().getProjectDescriptionManager().createProjectDescription(bareProject, false);
    ICConfigurationDescription cConfigDescription =
            cDescription.createConfiguration(ManagedBuildManager.CFG_DATA_PROVIDER_ID, config.getConfigurationData());
    cDescription.setActiveConfiguration(cConfigDescription);
    cConfigDescription.setSourceEntries(null);
    IFolder srcFolder = bareProject.getFolder("src");
    srcFolder.create(true, true, null);
    ICSourceEntry srcFolderEntry = new CSourceEntry(srcFolder, null, ICSettingEntry.RESOLVED);
    cConfigDescription.setSourceEntries(new ICSourceEntry[] { srcFolderEntry });
    buildInfo.setManagedProject(projectManaged);
    cDescription.setCdtProjectCreated();
    IIndexManager indexMgr = CCorePlugin.getIndexManager();
    ICProject cProject = CoreModel.getDefault().getCModel().getCProject(bareProject.getName());
    indexMgr.setIndexerId(cProject, IPDOMManager.ID_FAST_INDEXER);
    CoreModel.getDefault().setProjectDescription(bareProject, cDescription);
    ManagedBuildManager.setDefaultConfiguration(bareProject, config);
    ManagedBuildManager.setSelectedConfiguration(bareProject, config);
    ManagedBuildManager.setNewProjectVersion(bareProject);
    ManagedBuildManager.saveBuildInfo(bareProject, true);
    return bareProject;
}
As I discovered when debugging, it is indeed important to set proper configurations and descriptions as the indexer was postponed so long as the project didn't have those features set. To get the configurations for the platform as a starting point for an initial configuration:
public static List<IConfiguration> getValidConfigsForPlatform() {
    List<IConfiguration> configurations = new ArrayList<IConfiguration>();
    for (IConfiguration cfg : ManagedBuildManager.getExtensionConfigurations()) {
        IToolChain currentToolChain = cfg.getToolChain();
        if ((currentToolChain != null)
                && (ManagedBuildManager.isPlatformOk(currentToolChain))
                && (currentToolChain.isSupported())) {
            configurations.add(cfg);
        }
    }
    return configurations;
}
This basically answers the second part of the question, and thus I can create a C project for the purposes of testing code that uses the index. The testing code still needs to do some work.
Testing Code
I create files in the "src" folder in the project (created in the code above), and I either have to name them .c, or, if I want to name them .h, have them included by some .c file (otherwise the indexer won't see them). Finally, I can populate the files with some test code. To answer number 1, I need to block on both the auto-refresh jobs in Eclipse and then the index:
public static void forceIndexUpdate(IProject project) throws Exception {
    ICProject cProject = CoreModel.getDefault().create(project);
    Job.getJobManager().join(ResourcesPlugin.FAMILY_AUTO_REFRESH, null);
    CCorePlugin.getIndexManager().reindex(cProject);
    CCorePlugin.getIndexManager().joinIndexer(IIndexManager.FOREVER, new NullProgressMonitor());
    assertTrue(CCorePlugin.getIndexManager().isIndexerIdle());
    assertFalse(CCorePlugin.getIndexManager().isIndexerSetupPostponed(cProject));
}
I call this after I change the files in the project. It makes sure Eclipse is refreshed, and then makes sure the indexer completes without being postponed. Finally, I can run tests that depend on the indexer.
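For illustration, here is a hypothetical helper (populateAndIndex is my own name; the file name and contents are the ones from the question) showing how the pieces fit together in a test:

public static void populateAndIndex(IProject bareProject) throws Exception {
    String exampleCode =
            "#define HEAVY 20\n" +
            "#define TEST 5\n" +
            "void function() { }\n" +
            "int main() { return 0; }\n";
    // The file must live under the configured "src" source entry and end in .c,
    // otherwise the indexer ignores it.
    IFile exampleFile = bareProject.getFolder("src").getFile("findCodeFromIndex.c");
    exampleFile.create(new ByteArrayInputStream(exampleCode.getBytes("UTF-8")), true, null);
    forceIndexUpdate(bareProject);
}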
And the last point: I was wrong about using IBinding. The correct way to get the macros was the method IIndex.findMacros(char[] name, IndexFilter filter, IProgressMonitor monitor).
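A minimal sketch of that lookup, assuming the IIndex was obtained via CCorePlugin.getIndexManager().getIndex(cProject) after the index update above:

static void printMacro(IIndex index, String name) throws Exception {
    index.acquireReadLock();
    try {
        IIndexMacro[] macros = index.findMacros(name.toCharArray(), IndexFilter.ALL,
                new NullProgressMonitor());
        for (IIndexMacro macro : macros) {
            // getExpansionImage() is the raw replacement text, e.g. "20" for HEAVY.
            System.out.println(macro.getName() + " -> " + new String(macro.getExpansionImage()));
        }
    } finally {
        index.releaseReadLock();
    }
}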
I hope this helps at least someone out there. I would also appreciate feedback regarding the validity of this solution, as it is simply the first one I managed to create that worked. Just to confirm: I am not testing the indexer itself, but rather code I wrote that uses the indexer, and I want to test it under conditions as realistic as I can manage, given how critical it is.
At first I misunderstood my problem and posted this question: Can someone explain me Cascading and FetchType lazy in Ektorp?
What I need to do: I need to save an entity in CouchDB and then have a way to read it back while potentially ignoring some fields.
So I came up with this solution: I create a show function that deletes fields from a document and then sends it back.
function(doc, req) {
    var result = doc;
    var ignore = JSON.parse(decodeURIComponent(req.query.ignore)); // this is an array of field names
    for (var i = 0, j = ignore.length; i < j; i++) {
        if (result[ignore[i]]) {
            delete result[ignore[i]];
        }
    }
    return {
        body: JSON.stringify(result),
        headers: {
            "Content-Type": "application/json"
        }
    };
}
I also have the reverse function, in which the document keeps only the fields I tell the function to keep.
Is there a better way to do this?
I also want to use Ektorp to call this, but it only allows me to call views, so right now I am forced to manage the HTTP request myself. Is there a way to avoid this?
Right now this is the code I must use, but I would like to use Ektorp to do this.
HttpClient httpClient = new StdHttpClient.Builder().url("http://localhost:5984").build();
CouchDbInstance dbInstance = new StdCouchDbInstance(httpClient);
CouchDbConnector db = new StdCouchDbConnector("mydatabase", dbInstance);
db.createDatabaseIfNotExists();
String[] forget = new String[] { "field_to_ignore" };
String uri = "/mydatabase/_design/mydesigndoc/_show/ignorefields/mydocid?ignore=" + URLEncoder.encode(Json.stringify(Json.toJson(forget)), "UTF-8");
System.out.println(uri);
HttpResponse r = db.getConnection().get(uri);
String stuff = new Scanner(r.getContent()).useDelimiter("\\A").next();
System.out.println(stuff);
A show function isn't a terrible idea, from a CouchDB point of view. Ektorp may not support them, presumably because they're not hugely used, but Ektorp is open source and on GitHub; you could easily just add this functionality, especially since you already have the basics of a working implementation of it.
Alternatively you could just build a view that does this for a given set of fields. You can't really parameterize this though, so you'd need well-defined sets of fields you know beforehand.
Finally, I'd suggest either pulling the whole document and not worrying about it (unless you're in an extremely bandwidth-limited situation, it's probably not going to matter), or splitting the document into the constituent parts you're querying for and requesting them independently, if that's definitely the unusual case.