My application has 4 Java packages.
com.me.utilities
com.me.widget
com.me.analysis
com.me.interface
All packages depend upon the utilities. The widget package depends upon the interface package.
The utilities might be valuable to other applications, so they ought to be a package of their own. The analysis does not depend upon the widget or the interface, so it ought to be a package of its own. The interface might change, because the organization it interfaces to might go out of business, so it ought to be a package of its own.
This is just one application that produces one executable.
On the basis of this organization I do commits on each package but not on the executable. I want to start committing the executable. One way would be to commit the executable to a new git repository with no connection to the source, but that sounds reckless: an executable with no formal way to tie it to the source code.
Another way, which sounds a little inefficient, would be to start a new git repository that "adds" the source code of all 4 Java packages, each of which has many Java files, and also "adds" the executable. This seems a little strange because it fails to respect the 4 existing git repositories that already know about their respective collections of source code.
What is the right way to tie these 4 packages together with their common executable?
I use SmartGit for routine commits and I use command line git for reverting. I am willing to stop using SmartGit if the solution to this inquiry necessitates it.
It looks like what you're looking for is an artifact repository, like Nexus, Artifactory, JCenter, etc.
That's where you typically publish the artifacts produced by a build every time you do a release.
That's also what allows build tools and IDEs to fetch the artifacts for the libraries a project uses. So if you end up turning your utilities package into a separate library used by several different projects, you'll need to publish it to such an artifact repository. Both Gradle and Maven fetch their artifacts from, and can also publish artifacts to, such repositories.
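For illustration, here is a minimal sketch of publishing such a library with Gradle's maven-publish plugin; the coordinates and repository URL are placeholders, not anything from the question:

    // build.gradle of the utilities library (hypothetical coordinates)
    plugins {
        id 'java-library'
        id 'maven-publish'
    }

    group = 'com.me'      // placeholder group
    version = '1.0.0'     // placeholder version

    publishing {
        publications {
            // publishes the compiled jar together with POM metadata
            mavenJava(MavenPublication) {
                from components.java
            }
        }
        repositories {
            maven {
                // point this at your Nexus/Artifactory/etc. instance
                url = uri('https://repo.example.com/releases')
            }
        }
    }

Running gradle publish then uploads the jar and its metadata, after which any other build can depend on com.me:utilities:1.0.0 like any third-party library.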
I'm just getting started with CodeQL and have had plenty of success scanning Python projects. Now, I'm starting to scan Java projects, and I struggle to scan precompiled projects.
From what I gathered, it appears the CodeQL CLI includes an autobuilder for Java code and will build the projects for me. I'm trying to scan already-compiled projects from the Maven Central repository.
Question:
Is it possible to scan compiled Java source code (i.e., bytecode, class files) contained within a JAR file with CodeQL?
If so, how can I invoke this from the CLI to scan JAR files?
Thanks for any insight!
As mentioned in the other answer, for Java CodeQL observes the results during compilation and creates a database from it. It is therefore not possible to build a database from a JAR containing compiled classes. It is however possible to use compiled classes in a project (e.g. in the form of Maven dependencies, or JDK usage), and CodeQL will record the information that these classes are used, but it has no insight into what these classes do. That means no dataflow or taintflow will be available for them, unless CodeQL explicitly models it, see the list of supported frameworks.
However, since your plan is to run queries against projects from Maven Central, it is most likely easiest to obtain the databases from lgtm.com, or to directly use the Query Console on lgtm.com, see also the documentation. For most projects lgtm.com is able to build the project on its own.
lgtm.com is owned by Semmle, which originally created CodeQL and was acquired by GitHub.
From what I read, it does not seem to work on compiled classes. You will need the source code, whether that exists as a source JAR (which you then need to unzip before processing) or a GitHub project.
Usually when running it you provide the way to build your project, such as --language=java --command='mvn clean install -DskipTests', and this requires source code.
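For reference, a typical end-to-end CLI run looks roughly like this; the database path is a placeholder and the query suite is left for you to fill in:

    # Build a CodeQL database by observing an actual compilation (requires source)
    codeql database create java-db --language=java --command='mvn clean install -DskipTests'

    # Then run queries against the resulting database
    codeql database analyze java-db <your-query-suite> --format=sarif-latest --output=results.sarif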
My team uses a GitHub.com organization to keep all of our source code in private repos. (Previously, our workflow was emailing Dropbox links.) Most of the time each repo is one separate project with no dependency on any other (the only dependencies are on third-party open source libraries). Or if there is some dependency, then the .java files have just been copy-pasted into the other project.
I've recently been splitting up some of my code into reusable modules, but I don't know any way to do the dependency management when I use the libraries I'm creating in another project.
I know that with Gradle you can add a git repo as a source dependency in settings.gradle like this:
    sourceControl {
        gitRepository('https://github.com/user/project.git') {
            producesModule('user:project')
        }
    }
but I don't know if there's a way to make it work with private repos, and I don't know if there's a way to specify versions.
My current solution is to just build the library JAR and keep track of the binary version with GitHub release tagging. When I need to use the library in another project, I download the desired version of the JAR (typically the most recent), add it to a local /lib/ folder in the other project, and import it into the module path as a local JAR. Of course I need to go through the whole process again manually whenever I want to make a change to the library.
I also heard you can set up private Gradle or Maven servers and some companies do that, but I guess that would mean migrating away from GitHub.com?
Is there any way to make this work (either Gradle or Maven, it doesn't matter) to manage dependencies between GitHub private repos?
Can someone tell me, what is the most sensible way (or ways) to solve this?
Thanks.
What you need is a very typical Maven/Gradle-based setup where:
- each of your projects produces an artifact with a coordinate of the form group:name:version;
- your projects do not have to be explicitly aware of each other: they depend on the artifacts produced by other projects, which is called a binary dependency;
- for a project to locate a binary dependency, you need a central registry where you can publish all your artifacts. GitHub has a product called GitHub Packages for precisely this purpose (see the sketch below).
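As a concrete illustration (owner, repository and coordinates are made up), resolving dependencies from GitHub Packages in Gradle looks roughly like this; GitHub's Maven endpoint requires authentication even for reads, so a token is passed via environment variables here, which is one common convention:

    // build.gradle of a consuming project (hypothetical names throughout)
    repositories {
        mavenCentral()
        maven {
            url = uri('https://maven.pkg.github.com/your-org/your-library-repo')
            credentials {
                // a personal access token with the read:packages scope
                username = System.getenv('GITHUB_ACTOR')
                password = System.getenv('GITHUB_TOKEN')
            }
        }
    }

    dependencies {
        implementation 'com.yourorg:your-library:1.2.3'
    }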
If you don't want to use GitHub Packages yet, or your setup (number of projects, size of each project, size of your team) is small enough, you can check out all the projects locally and include them in a Gradle composite build, so that binary dependencies are substituted with local project dependencies. The good thing about the composite build is that when you decide to invest in a package registry, your build.gradle requires no change at all.
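A minimal sketch of such a composite build, assuming the library checkout sits next to the consuming project on disk (names and paths are placeholders):

    // settings.gradle of the consuming project
    rootProject.name = 'my-app'

    // Substitutes any binary dependency on the coordinates produced by
    // ../your-library with the local checkout of that build.
    includeBuild '../your-library'

With that line present, a declaration like implementation 'com.yourorg:your-library:1.2.3' resolves to the local project; remove the line later and the same declaration resolves against your package registry, so build.gradle itself never changes.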
BTW, where you run your private package registry does not really matter. You can use GitHub Packages, some other hosted service, or even run e.g. JFrog Artifactory on your own server. It is completely unrelated to where you host your source code, so you don't need to migrate away from GitHub in any case.
I currently manage a few separate Maven projects in which I use Protobufs as a serialization format and over the wire. I am using David Trott's maven-protoc plugin to generate the code at compile time.
All is good and well until I want those projects to communicate with one another, or rather, use each other's protobufs. The protobuf language has an "import" directive which does what I want, but I'm faced with the challenge of having project A export a ".proto" file (or possibly some intermediate format?) for project B to depend upon.
Maven provides a way for a project to bundle resources, but AFAIK these are meant to be used at runtime by the code, not by a goal during the compile/source-generation phase; at least I haven't been able to find documentation that describes what I want to achieve.
I've found another way to achieve this, and it doesn't involve any Maven magic. Diving into the code of the maven-protoc plugin, I found that this is a supported use case: the plugin will look for and collect any .proto files in dependent jars and unpack them into a temporary directory. That directory is then set as an import path for the protoc invocation.
All that needs to happen is for the .proto file to be included in the dependency's package, which I did by making it a resource:
projects/a/src/main/resources/a.proto
Now in projects/b/pom.xml, add 'a' as a regular Maven dependency and just import a.proto from b.proto as if it existed locally:
b.proto:
import "a.proto";
This isn't ideal, since file names may clash between various projects, but this should occur rarely enough.
You can package your .proto files in a separate .jar/.zip in the project where they are generated, and publish them in your repository using a dedicated classifier. Using the assembly plugin might help here to publish something close to "source jars" that are built during releases.
Then, on projects using them, add previously created artifact as dependency.
Use the dependency plugin with the "unpack-dependencies" goal, and bind it to a phase before "compile".
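A rough sketch of that last step in the consuming project's pom.xml; the output directory is an assumption, and your protoc plugin still needs to be pointed at it as an import path:

    <!-- pom.xml of the consuming project -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-dependency-plugin</artifactId>
      <executions>
        <execution>
          <id>unpack-proto-files</id>
          <!-- a phase before compile, so the .proto files exist in time -->
          <phase>generate-sources</phase>
          <goals>
            <goal>unpack-dependencies</goal>
          </goals>
          <configuration>
            <includes>**/*.proto</includes>
            <!-- hypothetical location; add it to protoc's import path -->
            <outputDirectory>${project.build.directory}/proto-imports</outputDirectory>
          </configuration>
        </execution>
      </executions>
    </plugin>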
I am developing a web application with a lot of libraries like Spring, Apache CXF, Hibernate, Apache Axis, Apache Commons and so on. Each of these frameworks comes with a lot of *.jar libraries.
For development I simply take all of the delivered libraries and add them to my classpath.
For deployment not all of these libraries are required, so is there a quick way to determine all the required libraries (*.jar) which are actually used by my source code?
If you move your project to use Maven such things become easier:
mvn dependency:analyze
mvn dependency:tree
For your example, Maven + IDE + nice dependency diagrams could help a lot.
See an example of this: it's much easier this way to figure out what happens in a project, and you don't need to add "all delivered libraries" to your project - just what is required.
JDepend traverses Java class file directories and generates design quality metrics for each Java package. JDepend allows you to automatically measure the quality of a design in terms of its extensibility, reusability, and maintainability to manage package dependencies effectively.
So, as a quick, dirty, and potentially inefficient way, you can try this in Eclipse:
1. Create two copies of your project.
2. In project copy #2, remove all the jars from the classpath.
3. Pick a source file that now has errors because it can't resolve a class reference. Pick one of the unresolved classes and note its fully qualified class name.
4. Do Control-Shift-T and locate the unresolved class. You should be able to see which jar it's contained in, since all the jars are still in the classpath for project copy #1.
5. Add the jar that contains this unresolved class back into your classpath in project copy #2, then repeat steps 3 and 4 until all class references are resolved.
Unfortunately you're not done yet since the jar files themselves may also have dependencies. Two ways to deal with this:
1. Go read the documentation for all the third-party packages you're using. Each package should tell you what its dependencies are.
2. Run your application and see if you get any ClassNotFoundExceptions. If you do, then use Control-Shift-T to figure out which jar that class comes from and add it to your classpath. Repeat until your project runs without throwing any ClassNotFoundExceptions.
The problem with #2 is that you don't really know you've resolved all the dependencies since you can't simulate every possible execution path your project might take.
We have several products which have a lot of shared code and which must be maintained several versions back.
To handle this we use a lot of Eclipse projects; some contain library jars, and some contain shared source code (split across several projects to avoid a giant heap with numerous dependencies, while still being able to compile everything from scratch to ensure that source and binaries are consistent). We manage those with projectSet.psf's, as these can directly pull all projects out of CVS and leave a fully prepared workspace. We do not do Ant builds directly or use Maven.
We now want to be able to put all these projects and their various versions into a Continuous Integration tool - I like Hudson, but this is just a matter of taste - which essentially means that we need an automatic way to check out the projects into a fresh workspace and compile the source folders as described in the project files of each project. Hudson does not provide such an approach to building a project, so I have been considering the best way to approach this.
Ideas have been:
- Find or write an Ant plugin/converter that understands projectSet.psf's and maps them to cvs-checkout and compile tags.
- Create the build.xml files from within Eclipse and use those. I tried this and found the result to be verbose, with absolute locations, which does not work well with automatic tools that put files where they want to.
- Write a Hudson plugin which understands projectSet.psf's to derive a configuration and build it.
- Just bite the bullet and manually create and update the CI configuration whenever stuff breaks - I don't like this :)
I'd really like to hear about other people's experiences so I can decide how to approach this.
Edit: Another option might be using a CI tool which knows more about Eclipse projects and/or project sets. We are not religious - this is just a matter of getting stuff running without having to do everything ourselves. Would CruiseControl be a better option, perhaps? Others?
Edit: Found that ant4eclipse has a "Team Project Set" facility. http://ant4eclipse.sourceforge.net/
Edit: Used the ant4eclipse and ant-contrib Ant extensions to build a complete workspace as a signed runnable jar file, similar to the Runnable Jar facility in Eclipse 3.5M6. I am still depending on Eclipse to create the initial empty workspace and extract the ProjectSet, so that is the next hurdle.
Edit: Ended up with a dual configuration, namely that Hudson extracts the same set of modules as listed in the projectSet.psf file from CVS (which needs to have the same tag), causing them to be located next to each other. Then ant4eclipse works well with the projectSet.psf file embedded in the main module. Caveat: the module list in Hudson must be manually updated, and it appears that a manual workspace cleanup is needed afterwards to let Hudson "discover" that there are more projects now than earlier. This has now worked well for us for a couple of months, but it was quite tedious to get everything working inside the ant file.
Edit: The "Use Team Projects" with ant4eclipse and a Ctrl-A, Ctrl-C in Project Panel with a Ctrl-V in the CVS projects in Hudson has turned out to work well enough for us to live with (for mature projects this is very rarely changed). I am awaiting the release of ant4eclipse 1.0 - http://www.ant4eclipse.org/, currently in milestone 2 - to see how much homegrown functionality can be replaced with ant4eclipse things.
Edit: ant4eclipse is as of 20100609 in M4 so the schedule at http://www.ant4eclipse.org/node?page=1 is slipping somewhat.
Edit: My conclusion after using our ant4eclipse approach for a longer period is that the build script gets very gnarly and is hard to maintain. Also, the Team Project Set facility (which ant4eclipse uses to locate the projects) worked well for CVS-based repositories, but not after we migrated to git (which is a big thing in itself). New projects will most likely be based on Maven, as this has good support in Jenkins.
I'm not completely sure I understand the problem, but it sounds like the root issue is that you have many projects, some of which are dependent on others. Some of the projects that are closer to the "leaf" of the dependency tree need to be able to use "stable" (or previously "released") versions of the more "core" projects.
I solve exactly this problem using Hudson, ant, and ivy. I follow a pattern demonstrated by Clark in Pragmatic Project Automation (he doesn't demonstrate the dependency problems and solutions, and he uses CruiseControl rather than Hudson).
I have a hand-written ant build file (we call it "cc-build.xml" because of our CruiseControl roots). This file is responsible for refreshing the project's working space from the CM repository and labeling the contents for future reference. It then hands off control to another hand-written ant build file (build.xml) provided by each project's developers. That build file is responsible for the traditional build steps (compile, packaging, etc.) and is required to spit out the installable artifacts, unit test reports, etc., to the Hudson artifacts directory. It is my experience that automatically generated build files (by Eclipse or other similar IDEs) will never get close to being sufficiently robust for use in a CI scenario.
Additionally, it uses Ivy to resolve its own dependencies. Ivy supports precisely-specified dependency versions (e.g. "use version 1.1") and it supports "fuzzy" versions (e.g. "use version 1.1+" or "use the latest version in integration status"). Our projects typically start out specifying a very "fuzzy" version for internal projects under ongoing development, and as they get close to a release point, they "freeze" the dependency version so that stuff stops moving underneath them.
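For illustration, the corresponding dependency declarations in ivy.xml might look like this (organisation and module names are made up):

    <!-- ivy.xml of a project under ongoing development -->
    <dependencies>
        <!-- frozen, precisely-specified version for a release branch -->
        <dependency org="com.mycompany" name="core-lib" rev="1.1"/>
        <!-- fuzzy: the latest available revision matching 1.1+ -->
        <dependency org="com.mycompany" name="util-lib" rev="1.1+"/>
        <!-- fuzzy: the latest build that has reached integration status -->
        <dependency org="com.mycompany" name="gui-lib" rev="latest.integration"/>
    </dependencies>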
The non-leaf projects (projects that other projects depend on) also use Ivy to publish their artifacts to our internal Ivy repository. That repository keeps all past builds, so any project can always depend on any previous version of another.
Lastly, each project in Hudson is configured with a build trigger that causes a rebuild whenever one of the projects it depends on builds successfully. This causes it to get built again with the (possibly) new dependency version resolved by Ivy.
It is worth noting that once you get this up and running, consistent automated "labeling" or "tagging" of an automated build's inputs is going to be critical for you - otherwise troubleshooting post-build problems is going to result in having to untangle a hornet's nest to find the original source.
Getting all of this set up for our environment took quite a bit of effort (primarily in setting up the Ivy repository and ant build files), but it has paid for itself many times over in saved headaches from manually managing dependencies and in decreased troubleshooting effort.
Write a Hudson plugin which understands projectSet.psf's to derive a configuration and build it.
That seems like the winning answer to me.
I work with CruiseControl rather than Hudson, but in my experience, if you can create a plugin that solves your problem, it will quickly pay off. And it is generally pretty easy to write a plugin that is custom-fit to your situation, as opposed to one that needs to work for everyone in a similar situation.
I have tried both CruiseControl (CC) and Hudson for our CI solution, and we (as a company) decided on Hudson. But for your question "Does CC support Eclipse project builds?" the answer is no, as far as I know. CC supports many different build tools and source control systems, but it is a bit more difficult to configure and use; Hudson is simpler on both counts. We developed custom plugins for both CC and Hudson for the parts of our build cycle that they do not provide as-is. As for plugin development, if you know or use Maven, Hudson is simpler too. If you are not familiar with Maven, you first need to learn its basic usage to successfully develop a Hudson plugin, but once you do, plugin development, testing and even debugging are simpler in Hudson.
For your specific problem, I can think of a solution that makes use of Eclipse plugins as well. You can develop your own Eclipse plugin that, for instance, gets the psf files from a (configurable) folder and uses Eclipse internals to process those psf's. I mean you can use existing Eclipse source code that takes a psf file, checks out its project definitions and compiles those projects. This plugin of yours may have a preference page (which you can access via Eclipse -> Window -> Preferences) to configure which folder it looks in for psf files.

Your Eclipse plugin should also have a way to start psf processing without user interaction. For this, you can use IPC to trigger the process: your plugin can listen on a port, and you can write another small Java application that connects to your plugin through this port and triggers the processing.

As for the CI part, you can use either CC or Hudson and their external process execution support. If you are using Windows, you can write a bat file (for Linux, an sh file) that first launches the Eclipse that has your plugin installed, then launches the small Java application that communicates with your Eclipse plugin to trigger the processing. From your CI tool you then run this bat/sh file.
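To make the IPC part concrete, here is a minimal sketch in plain Java; all names are made up, and a real Eclipse plugin would run the listener in a background job rather than blocking like this:

    // Hypothetical trigger mechanism: the plugin listens, the CI-launched app connects.
    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    public class PsfTrigger {

        // Plugin side: block until a trigger connection arrives, then process the psf files.
        public static void listenForTrigger(int port, Runnable processPsfFiles) throws IOException {
            try (ServerSocket server = new ServerSocket(port);
                 Socket client = server.accept()) {
                processPsfFiles.run(); // check out the project set and compile it
                client.getOutputStream().write("done\n".getBytes(StandardCharsets.UTF_8));
            }
        }

        // Trigger side: the small application your bat/sh file launches after Eclipse.
        public static void trigger(String host, int port) throws IOException {
            try (Socket socket = new Socket(host, port)) {
                OutputStream out = socket.getOutputStream();
                out.write("build\n".getBytes(StandardCharsets.UTF_8));
                out.flush();
            }
        }
    }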