I just received a third-party authentication library to use in my client's application. I didn't receive any documentation with it, and I'm trying to dig through the source to see how it works. I'm very new to Java: when I click Go To -> Declaration on methods in IntelliJ, it sends me to a .class file, and I see a bunch of stubbed methods with /* compiled code */ in their bodies.
I'm fairly sure this is common in Java; I just don't know what to search for to learn what exactly is going on. Any clarification would be great.
This typically means that you don't have the source code, and IntelliJ IDEA just displays /* compiled code */ as a placeholder for the source you don't have. I believe this has since changed: IntelliJ now comes bundled with a full Java decompiler plugin and will display the decompiled source code as standard.
To better see what's going on, the best option would be to get the actual source code of the third-party library.
You should of course also get the documentation, as reading the source code and guessing how to use a library usually isn't the best way to learn.
The second-best option would be to use the decompiler plugin in IntelliJ, which will automatically decompile the Java class files (note that the license for your third-party library may prohibit exactly that). This will never be a 100% solution, but in most cases it's better than nothing.
You should really search/ask for documentation. Javadoc is usually invaluable if a method does something you can't guess from its name. Otherwise, use a decompiler such as JD-GUI.
.java source code is compiled to .class bytecode by a compiler such as javac. The compiler may optimize certain things, and a compile-decompile round trip is highly unlikely to yield the original source. Also, all comments are discarded, and if the code wasn't compiled in debug mode, even the local variable names are lost. So: decompilation is not a good alternative to handwritten documentation.
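To illustrate what gets lost, compare a hand-written method with the kind of output a decompiler typically reconstructs (a contrived example, not the output of any particular tool):

// Original source:
/** Returns the sum of all elements. */
int sum(int[] values) {
    int total = 0;
    for (int v : values) {
        total += v;
    }
    return total;
}

// Typical decompiled form: the comment is gone, locals get synthetic
// names, and the enhanced for loop comes back as a plain index loop.
int sum(int[] var1) {
    int var2 = 0;
    for (int var3 = 0; var3 < var1.length; ++var3) {
        var2 += var1[var3];
    }
    return var2;
}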
If your library is built with sourceFiles:
task androidSourcesJar(type: Jar) {
    classifier = 'sources'
    from android.sourceSets.main.java.sourceFiles // look at this line
}
Then you will see classes with /* compiled code */
If your library is built with srcDirs:
task androidSourcesJar(type: Jar) {
    classifier = 'sources'
    from android.sourceSets.main.java.srcDirs // look at this line
}
Then you will see classes with full source, without /* compiled code */.
I can't figure out how to specify the class that is the entry point of my program (and therefore shouldn't be obfuscated), together with my jar archive. Please show me a command-line example of how to use JBCO when I have /home/example/myJar.jar containing the class com.example.EntryPoint, plus an external dependency /home/example/dependencies/dependencyJar.jar.
Also, does anybody know whether this project is still alive and which JDK it supports?
A lot of time has passed, but I recently came across the Java transformation frameworks and found out that JBCO is now part of the Soot framework, hosted on GitHub, though it is deprecated as of now. There is a wiki where you can get more info about how to use Soot/JBCO (if you still want to, at your own risk: even though JBCO is deprecated and not under active development, it still accepts PRs from contributors from time to time).
As for the command-line options, it might be:
java -cp .:/home/example/sootclasses-trunk-jar-with-dependencies.jar soot.jbco.Main -process-dir /home/example/compiled -output-dir /home/example/obfuscated -soot-class-path .:/home/example/myJar.jar -output-format class -app -main-class com.example.EntryPoint -t:9:wjtp.jbco_cr
Soot can process your compiled code as class files (then pass the directory to the -process-dir option) or as a jar (then pass it as part of the soot-class-path). Soot can handle many forms of bytecode (Java/Scala bytecode, Android bytecode, Jasmin, Jimple). There are also options to specify more precisely which classes are library classes and which are application or argument classes; for more info, please refer to Soot's wiki page.
I am working with ForgeGradle (Minecraft Forge modding platform).
I'd like to obfuscate my mod before publishing, but the nature of the Forge platform won't allow me to do it by simply running a program like ProGuard after compilation (with defined libraries).
Why?
The structure goes like this:
Mod -> Forge -> Minecraft
Since Minecraft uses its own obfuscated classes, and the ForgeGradle compiler does not DIRECTLY obfuscate the mod's code to match Minecraft's, it is not possible to use MC.jar as a library when running ProGuard. A compiled Forge mod is actually decoded by Forge at runtime using SRG names. The logic behind this is not easily explainable, so I'll just note: I cannot obfuscate the .jar in a way that fits the libraries.
So I thought - I could just take my mod's code (.java files) and rename all fields/methods/classes that are MINE before Forge compilation.
Is there software that would allow me to pick a number of .java files and "obfuscate" them in a way that does not rename references that don't belong to them?
EDIT (more explanation):
Mod's code has 3 states: Development, Compiled, Running.
I will try to give an example:
Let's say there is a decompiled method ItemSword.onHit() inside Minecraft.jar
And its compiled (obfuscated) version looks like this: bca.aa(); also, all packages are lost (flattened).
In the mod's development state (.java), to reference it we simply write: ItemSword.onHit()
When we compile the mod, the call will look like this (.class): ItemSword.func_ab4234() - this is the SRG I was talking about.
Now when the mod is loaded into the game, Forge will translate "ItemSword" to "bca" and "func_ab4234()" to "aa()".
Because of this I can't even add a proper library - there IS NONE. I will always get a NoClassDefFound warning in ProGuard, and I can't ignore it (it will crash the build).
So after this edit - is it still possible to obfuscate with ProGuard (considering I cannot have a "good" library assigned)?
Did you try the ProGuard options?
http://proguard.sourceforge.net/manual/usage.html
E.g. for Serializable classes and other stuff, put this in your ProGuard configuration (you can also preserve complete classes if you like):
<!-- With this code serializable classes will be backward compatible -->
<keepnames implements="java.io.Serializable"/>
<!-- or for native access:-->
<keepclasseswithmembernames>
    <method access="native"/>
</keepclasseswithmembernames>
<!--Preserve all public classes, and their public and protected fields and methods.-->
<keep access="public">
    <field access="public protected"/>
    <method access="public protected"/>
</keep>
If I got your question right, you want to obfuscate your own code and nothing beyond that. That's what ProGuard is actually quite good at. Let's assume you created your classes in the packages com.foo and com.bar. You can use this simple ProGuard rule to obfuscate only your own classes:
-keep class !com.foo.**,!com.bar.** { *; }
It tells ProGuard not to obfuscate any members of classes that do not belong to either com.foo or com.bar.
If you are getting NoClassDef errors, you added the wrong library. I guess you are using some kind of IDE (perhaps Eclipse). Have a look at the libraries your project references to find the correct library classpath (e.g. a jar file). You basically need to find the classpath used for compiling your code; give that to ProGuard as well and everything should work.
The Gradle User Guide often mentions that Gradle is declarative and uses build-by-convention. What does this mean?
From what I understand it means that, for example, in the Java plugin there are conventions such as: sources must be in src/main/java, tests in src/test/java, resources in src/main/resources, and finished jars in build/libs, and so on. However, Gradle does not oblige you to use these conventions, and you can change them if you want.
But the first concept gives me more trouble. It seems to be like SQL: you say what you want your queries to do, but not how the database system will execute them, which algorithm to use to extract the data, etc.
Please tell me more so I can understand these concepts properly. Thanks.
Your understanding of build by convention is correct, so I don't have to add anything there. (Also see Jeff's answer.)
The idea behind declarative is that you don't have to work on the task level, implementing/declaring/configuring all tasks and their dependencies yourself, but can work on a higher, more declarative level. You just say "this is a Java project" (apply plugin: "java"), "here is my binary repository" (repositories { ... }), "here are my sources" (sourceSets { ... }), "these are my dependencies" (dependencies { ... }). Based on this declarative information, Gradle will then figure out which tasks are required, what their dependencies are, and how they need to be configured.
In order to understand a declarative style of programming it is useful to compare and contrast it against an imperative programming style.
Declarative Programming allows us to specify what we want to get done.
In Imperative Programming we specify how we get something done.
So when we use Gradle, as Peter describes, we make declarations, such as "this is a Java project" or "this is a Java web application".
Gradle then makes use of plugins that offer the service of handling the building of things like "Java projects" or "web applications". This is nice because it is the Gradle plugin that contains the implementation details concerned with tasks such as compiling Java classes and building war files.
Contrast this against another build system, Make, which is more imperative in nature. Let's take a look at a simple Make rule taken from here:
foo.o : foo.c defs.h
    cc -c -g foo.c
So here, we see a rule that describes how to build an object file foo.o from a C source file and a C header file.
The Make rule does two things.
The first line says that the foo.o file depends on foo.c and defs.h. This line is kind of declarative insofar as Make knows how to check the timestamp on foo.o to see whether it is older than foo.c or defs.h, and if it is, Make will invoke the command that follows on the next line.
The next line is the imperative one.
The second line specifies exactly what command to run (cc, a C compiler) when foo.o is older than either foo.c or defs.h. Note also that the person writing the Makefile rule must know which flags are passed to the cc command.
Build by convention is the idea that if you follow the default conventions, then your builds will be much simpler. So while you can change the source directory, you don't need to explicitly specify the source directory. Gradle comes with logical defaults. This is also called convention over configuration.
This part was edited to be clearer about the declarative nature, based on Peter's answer:
The idea of the build being declarative is that you don't need to specify every step that has to be done. You don't say "do step 1, do step 2, etc."; you define the plugins (or tasks) that need to be applied, and Gradle then builds a task execution graph and figures out what order to execute things in.
So, long story short, I need to use a different Java compiler from the one that came with my Eclipse installation (Windows). I have to run some code that runs well on my other team members' computers (OS X) but fails to run here. It seems the compiler I am using is much stricter than theirs, so I am looking for a more relaxed compiler (until they fix their code to comply with my actual compiler).
What are the options available?
So, a totally stripped down version of the code is like this:
public class TreeSet<E extends Xpto & IOrderable<E>> implements SortedSet<E>, Cloneable {
    ...
}

public interface Xpto {}

interface IOrderable<E> extends Cloneable {
    boolean greaterEq(E e);
    IOrderable<E> clone();
}
the error being:
"The inherited method Object.clone() cannot hide the public abstract method in IOrderable"
You have these options:
- Sun/Oracle javac (recommended)
- IBM Jikes
- GCJ
But your main description sounds more like a build-specific problem. You can tweak the settings by right-clicking the project -> Properties -> Java Compiler.
UPDATE: Object already provides a clone() method, but it is protected (Cloneable itself is just a marker interface). So you should strip that line from the IOrderable interface, or else every implementor must override clone() as public. In TreeSet, clone() has to be public.
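To make the error concrete, here is a minimal sketch (Element is a hypothetical implementor, not from the question): any class implementing IOrderable inherits Object's protected clone(), which cannot satisfy the public abstract clone() declared in the interface, so the class must widen it to public.

class Element implements Xpto, IOrderable<Element> {
    public boolean greaterEq(Element e) {
        return true; // placeholder ordering logic
    }

    @Override
    public Element clone() { // must be public to implement IOrderable.clone()
        try {
            return (Element) super.clone();
        } catch (CloneNotSupportedException ex) {
            throw new AssertionError(ex); // unreachable: IOrderable extends Cloneable
        }
    }
}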
Eclipse uses its own built-in compiler. You should probably try using the one that comes with the JDK.
Alternatively, have you tried changing the Eclipse compiler options? There's a lot you can tweak, including whether a given construct ends up as an error, a warning, or nothing. Look in either the project preferences or your workspace preferences, under Java > Compiler > Errors/Warnings. If you could give an example of the errors you're getting (and ideally the code which is failing), we could give more advice.
You should use an Ant build script, which when executed will in turn use the normal Sun Java compiler. See here for a simple build script. It's a good way of getting around the problems :)
Eclipse probably uses the one in the JDK, right? (Wrong, as it turns out. From the comments: according to one commenter and three upvoters, Eclipse uses its own internal compiler, my bad. But that means you can use the one in the JDK too :D)
Anyway, you can try http://en.wikipedia.org/wiki/GCJ
Comments suggest this is not a compiler, although I do not agree. Please educate me on my wrongness and I'll gladly update or remove this answer.
From the Wikipedia page:
The GNU Compiler for Java (GCJ or gcj) is a free software compiler for the Java programming language and a part of the GNU Compiler Collection. GCJ can compile Java source code to either Java Virtual Machine bytecode, or directly to machine code for any of a number of CPU architectures. It can also compile class files containing bytecode or entire JARs containing such files into machine code.
I'm a longtime C++ programmer, new to Java. I'm developing a Java Blackberry project in Eclipse. Question - is there a way to introduce different configuration sets within the project and then compile slightly different code based on those?
In Visual Studio, we have project configurations and #ifdef; I know there's no #ifdef in Java, but maybe something on file level?
You can set up 'final' fields and if statements to get the compiler to optimize the compiled bytecode.
// ...
public static final boolean myFinalVar = false;
// ...

if (myFinalVar) {
    // do something ...
}
If 'myFinalVar' is false when the code is compiled, the 'do something' part will be omitted from the compiled class. If you have more than one condition, this can be tidied up a bit: move them all to another class (say 'Config.myFinalVar'), so that the conditions are all kept in one neat place.
This mechanism is described in 'Hardcore Java'.
[Actually I think this is the same mechanism as the "poor man's ifdef" posted earlier.]
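A minimal sketch of that Config-class idea (the class and flag names are illustrative):

// Config.java - all compile-time switches in one place. Because the
// fields are compile-time constants, javac folds them into every call site.
public final class Config {
    public static final boolean LOGGING = false;
    public static final boolean DEMO_MODE = true;

    private Config() {} // no instances
}

Anywhere in the code base, a block like if (Config.LOGGING) { ... } is then dropped entirely from the bytecode whenever LOGGING is a constant false.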
You can manage different classpaths; for example, implement each 'Action' in a set of distinct directories:
dir1/Main.java
dir2/Action.java
dir3/Action.java
then use a different classpath for each version
javac -sourcepath dir1 -cp dir2 dir1/Main.java
or
javac -sourcepath dir1 -cp dir3 dir1/Main.java
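For concreteness, a sketch of what those files might contain (the class bodies are invented for the example):

// dir1/Main.java
public class Main {
    public static void main(String[] args) {
        new Action().run(); // resolves against whichever Action is on the classpath
    }
}

// dir2/Action.java - version 1
public class Action {
    public void run() { System.out.println("behaviour for version 1"); }
}

// dir3/Action.java - version 2
public class Action {
    public void run() { System.out.println("behaviour for version 2"); }
}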
In JDK 6, you can do it by using Java's ServiceLoader class.
Check it here.
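Roughly, the idea looks like this (the interface and class names are made up for the example):

import java.util.ServiceLoader;

// Greeter.java - the service interface
interface Greeter {
    void greet();
}

// Main.java - each jar on the classpath registers its implementation in
// META-INF/services/Greeter (one fully qualified class name per line);
// ServiceLoader picks up whichever registrations are present at runtime.
class Main {
    public static void main(String[] args) {
        for (Greeter g : ServiceLoader.load(Greeter.class)) {
            g.greet();
        }
    }
}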
If you want this specifically for BlackBerry, the BlackBerry JDE has a pre-processor:
You can enable preprocessing for your applications by updating the Eclipse™ configuration file. In C:\Program Files\Eclipse\configuration\config.ini, add the following line:
osgi.framework.extensions=net.rim.eide.preprocessing.hook
If you enable preprocessing after you have had a build, you must clean the project from the Project menu before you build the project again.
Then you can do things in the code like:
//#ifdef SOMETHING
// do something here
//#else
// do something else
//#endif
For details see Specifying preprocessor defines
Can one call that a poor man's ifdef: http://www.javapractices.com/topic/TopicAction.do?Id=64 ?
No, Java doesn't have an exact match for that functionality. You could use aspects, or use an IoC container to inject different implementation classes.
You can integrate m4 into your build process to effectively strap an analogue to the C preprocessor in front of the Java compiler. Much hand-waving lies in the "integrate" step, but m4 is the right technology for the text processing job.
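As a rough illustration (the macro name, file name, and setup are invented; this is not a standard toolchain):

// Platform.java.m4 - run through m4 before javac, e.g.:
//   m4 -DANDROID Platform.java.m4 > Platform.java
// ifdef(...) is m4 syntax that disappears during preprocessing.
class Platform {
ifdef(`ANDROID',
`    static String name() { return "android"; }',
`    static String name() { return "desktop"; }')
}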
Besides Maven, Ant, and other build tools that provide similar functionality, one would rather build interfaces in Java and switch the implementations at runtime.
See the Strategy pattern for more details.
In contrast to C/C++, this does not come with a big performance penalty, as Java's JIT compiler optimizes at runtime and is able to inline these patterns in most cases.
The big pro of this pattern is flexibility: you can change the underlying implementation without touching the core classes.
You should also check IoC and the Observer pattern for more details.
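A minimal sketch of the strategy idea (all names are illustrative):

// The interface the core classes depend on.
interface Renderer {
    void render(String text);
}

class HtmlRenderer implements Renderer {
    public void render(String text) { System.out.println("<p>" + text + "</p>"); }
}

class PlainRenderer implements Renderer {
    public void render(String text) { System.out.println(text); }
}

class Document {
    private final Renderer renderer; // implementation chosen at runtime

    Document(Renderer renderer) { this.renderer = renderer; }

    void print(String text) { renderer.render(text); }
}

Swapping new Document(new HtmlRenderer()) for new Document(new PlainRenderer()) changes the behaviour without touching Document, and the JIT can still inline the calls.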
You could use Maven's resource filtering in combination with public static final fields, which will indeed be compiled conditionally.
private static final int MODE = ${mode}; // placeholder replaced by Maven resource filtering
// ...
if (MODE == ANDROID) {
    // android-specific code here
} else {
    // code for the other platforms
}
Now you need to add a property called "mode" to your Maven pom, with the same value as your ANDROID constant.
The Java compiler should (!) remove the if and the else blocks, leaving only your Android code.
Not tested, so there is no guarantee, and I would prefer configuration over conditional compilation.
There are a couple of projects that bring support for comment-based conditional compilation to Java:
java-comment-preprocessor
JPSG
Example in JPSG:
/* with Android|Iphone platform */
class AndroidFoo {
void bar() {
/* if Android platform */
doSomething();
/* elif Iphone platform */
doSomethingElse();
/* endif */
}
}
In Eclipse you could use multiple projects:
Main (contains common code)
Version1 (contains version1 code)
Version2 (contains version2 code)
In Main, select Project -> Properties -> Java Build Path -> Projects tab
Select Add...
Add "Version1" xor "Version2" and OK back to the workspace.
Version1 and Version2 contain the same files but different implementations. In Main you write as usual, e.g.:
import org.mycustom.Version;
And if you include the Version1/Version2 project as a reference, it will compile with the Version.java file from that project.
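For example, the two variants might look like this (contents are illustrative):

// Version1/org/mycustom/Version.java
package org.mycustom;
public class Version {
    public static final String NAME = "version1";
}

// Version2/org/mycustom/Version.java
package org.mycustom;
public class Version {
    public static final String NAME = "version2";
}

Main then compiles against whichever variant project is on its build path.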