I have two projects in Git named Zeus and Odin. Both contain some Java packages, files, and libraries. One particular package named Olympos is common to both and contains 'almost' the same files. The difference is that files within Olympos for Zeus may have methods that interact with a DB, but those in Odin never will (though they will contain methods with the same names, holding only placeholder code). Please look at the following example for further clarification:
Project: Zeus;
Package: Olympos;
File: DummyOne.java;
Method:
public void adapterDB() {
    // do something to connect with the DB
    // do something DB-specific: fire queries
}
Project: Odin;
Package: Olympos;
File: DummyOne.java;
Method:
public void adapterDB() {
    // do nothing; just return (placeholder)
}
So the problem: Olympos needs to be in sync at all times for both projects, such that when a file change is checked in to Zeus, it automatically syncs with Odin, or vice versa. But it has to happen conditionally:
- if a new method that contains DB-related operations is checked in to any file (in Zeus), only the method header should sync to Odin, without the actual logic
- if a new method that does NOT contain DB-related operations is checked in to any file (in either Zeus or Odin), it should match completely in both packages.
Obviously, I want to avoid the additional time required to make such changes manually every time and then sync them up separately in these projects, since there are around two dozen such changes every week.
Is this something possible using Git? Or is there perhaps a more obvious solution outside Git (I hope I am not missing the elephant in the room)?
I'm developing an application in which JGit is used.
After a pull, I have conflicting files. I can get them from
List<String> list = git.status().call().getConflicting();
The list contains the files in conflict.
I know that I can get the conflicting files from
Map<String, int[][]> conflicts = git.pull().call().getMergeResult().getConflicts();
but it doesn't work if I restart my application. After the restart, I get an empty map because I'm not able to redo the pull while the repository is in a merging state.
How can I get the conflicting lines by file name via the JGit API?
You could try to use a ResolveMerger to re-run the merge like so:
ThreeWayMerger merger = MergeStrategy.RESOLVE.newMerger(repository, true);
merger.merge(headCommit, fetchedCommit);
Note that the MergeCommand that is called during pull may use a different merge strategy. See MergeCommand ~ line 337 for details. However, make sure to create an in-core merger (the second argument must be true).
With merger.getMergeResults() you should be able to get the conflicting lines.
The whole approach, however, may fail because your working directory is already dirty with conflict markers (<<<<<<<). Depending on your overall goal, I suggest reconsidering your approach to pull.
If you fetch changes from the upstream repository (without merging immediately) you can dry-run the merge as outlined above as often as necessary. The FetchResult returned by FetchCommand::call() contains information about the commit(s) that were fetched.
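For illustration, here is a minimal sketch of that fetch-then-dry-run-merge approach; the repository path, remote name, and branch ref are assumptions:
import java.io.File;
import java.util.Map;

import org.eclipse.jgit.api.Git;
import org.eclipse.jgit.diff.Sequence;
import org.eclipse.jgit.lib.ObjectId;
import org.eclipse.jgit.lib.Repository;
import org.eclipse.jgit.merge.MergeResult;
import org.eclipse.jgit.merge.MergeStrategy;
import org.eclipse.jgit.merge.ResolveMerger;
import org.eclipse.jgit.merge.ThreeWayMerger;

public class DryRunMerge {
    public static void main(String[] args) throws Exception {
        try (Git git = Git.open(new File("/path/to/repo"))) {   // assumed path
            Repository repo = git.getRepository();

            // fetch without merging; can be repeated as often as necessary
            git.fetch().setRemote("origin").call();

            ObjectId head = repo.resolve("HEAD");
            ObjectId fetched = repo.resolve("refs/remotes/origin/master"); // assumed branch

            // in-core merger: leaves the (possibly dirty) work directory untouched
            ThreeWayMerger merger = MergeStrategy.RESOLVE.newMerger(repo, true);
            if (!merger.merge(head, fetched)) {
                Map<String, MergeResult<? extends Sequence>> results =
                        ((ResolveMerger) merger).getMergeResults();
                // each entry maps a conflicting path to a MergeResult whose
                // MergeChunks mark the conflicting line ranges
                results.keySet().forEach(System.out::println);
            }
        }
    }
}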
I'm working on a project which, in part, displays all the files in a directory in a JTable, including sub-directories. Users can double-click the sub-directories to update the table with that new directory's content. However, I've run into a problem.
My lists of files are generated with file.listFiles(), which pulls up everything: hidden files, locked files, OS files, the whole kit and caboodle, and I don't have access to all of them. For example, I don't have permission to read/write in "C:\Users\user\Cookies\" or "C:\ProgramData\ApplicationData\". That's OK though; this isn't a question about getting access to these. Instead, I don't want the program to display a directory it can't open. However, the directories I don't have access to and the directories I do are behaving almost exactly the same, which is making it very difficult to filter them out.
The only difference in behavior I've found is if I call listFiles() on a locked directory, it returns null.
Here's the block of code I'm using as a filter:
for (File file : folder.listFiles()) {
    if (!(file.isDirectory() && file.listFiles() == null)) {
        strings.add(file.getName());
    }
}
Where 'folder' is the directory I'm looking inside and 'strings' is a list of names of the files in that directory. The idea is a file only gets loaded into the list if it's a file or directory I'm allowed to edit. The filtering aspect works, but there are some directories which contain hundreds of sub-directories, each of which contains hundreds more files, and since listFiles() is O(n), this isn't a feasible solution (list() isn't any better either).
However:
- file.isHidden() returns false
- canWrite()/canRead()/canExecute() return true
- getPath() returns the same as getAbsolutePath() and getCanonicalPath()
- createNewFile() returns false for everything, even directories I know are OK. Plus, that's a solution I'd really like to avoid even if it worked.
Is there some method or implementation I just don't know to help me see if this directory is accessible without needing to parse through all of its contents?
(I'm running Windows 7 Professional and I'm using Eclipse Mars 4.5.2, and all instances of File are java.io.File).
The problem you have is that you are dealing with File. By all accounts in 2016, and in fact since 2011 (when Java 7 came out), it has been superseded by JSR 203.
Now, what is JSR 203? It is a totally new API to deal with anything related to file systems and file system objects; and it extends the definition of a "file system" to include what you find on your local machine (the so-called "default filesystem" by the JDK) and other file systems which you may use.
Sample page on how to use it: here
Among the many advantages of this API is that it grants access to metadata which you could not access before; for instance, you specifically mention the case, in a comment, that you want to know which files Windows considers as "system files".
This is how you can do it:
// get the path
final Path path = Paths.get(...);
// get the DOS attributes
final DosFileAttributes attrs = Files.readAttributes(path, DosFileAttributes.class);
// Is this file a "system file"?
final boolean isSystem = attrs.isSystem();
Now, what is Paths.get()? As mentioned previously, the API gives you access to more than one filesystem at a time; a class called FileSystems gives access to all file systems visible by the JDK (including creating new filesystems), and the default file system, which always exists, is given by FileSystems.getDefault().
A FileSystem instance also gives you access to a Path using FileSystem#getPath.
Combine this and you get that those two are equivalent:
Paths.get(a, b, ...)
FileSystems.getDefault().getPath(a, b, ...)
About exceptions: File handles them very poorly. Just two examples:
File#createNewFile will return false if the file cannot be created;
File#listFiles will return null if the contents of the directory pointed by the File object cannot be read for whatever reason.
JSR 203 has none of these drawbacks, and does even more. Let us take the two equivalent methods:
File#createNewFile becomes Files#createFile;
File#listFiles becomes either of Files#newDirectoryStream (or derivatives; see javadoc) or (since Java 8) Files#list.
These methods, and others, have a fundamental difference in behaviour: in the event of a failure, they will throw an exception.
And what is more, you can differentiate what exception this is:
if it is a FileSystemException or derivative, the error is at the filesystem level (for instance, "access denied" is an AccessDeniedException);
if it is an IOException, then the problem is more fundamental.
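As an illustration of how this helps with the original problem, here is a small sketch (the starting directory is an assumption) that skips directories that cannot be opened instead of getting back null:
import java.nio.file.AccessDeniedException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ListReadableEntries {
    public static void main(String[] args) throws Exception {
        Path dir = Paths.get("C:/Users/user");  // assumed starting directory
        try (DirectoryStream<Path> entries = Files.newDirectoryStream(dir)) {
            for (Path entry : entries) {
                if (Files.isDirectory(entry)) {
                    // probe: opening the stream throws AccessDeniedException
                    // for locked directories, instead of returning null
                    try (DirectoryStream<Path> probe = Files.newDirectoryStream(entry)) {
                        System.out.println(entry.getFileName());
                    } catch (AccessDeniedException denied) {
                        // skip directories we are not allowed to open
                    }
                } else {
                    System.out.println(entry.getFileName());
                }
            }
        }
    }
}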
This answer cannot contain each and every use case of JSR 203; this API is vast and very complete, and although not without flaws, it is infinitely better than what File has to offer in any case.
I faced the very same problem with paths like C://users/myuser/cookies.
I already used JSR 203, so the above answer kind of didn't help me.
In my case the important attribute of those files was the hidden one.
I ended up using the FileSystemView, which excluded those files as I wanted.
File[] files = FileSystemView.getFileSystemView().getFiles(new File(strHomeDirectory), !showHidden);
I'm writing an application in Java that will facilitate creating wireless sensor networks using off-the-shelf microcontrollers, sensors, and radios. Each sensor and radio will most likely require unique code. I'm planning on creating skeletons for each platform and then having modular bits of code for each sensor and radio that can be plugged into these skeletons. This will result in a library of static information that will be used to dynamically generate code for these sensors.
I'm not sure what the best way to store and organize this data would be. I started off trying to create classes for each sensor encapsulating its unique properties, but using objects for data storage only seems weird. I feel like SQL would be overkill, as the data isn't really changing, and I would also like to keep everything in version control. Should I just use flat files? XML? Any advice on how to architect this project would be very welcome.
Instead of generating source, I'd go binary. Conceptually, that is.
Why would the source code need to change if a device is plugged in or out? Simply compile binary device driver libraries and link them to the main app.
There is an assembler, so likely there is a linker.
If there is no linker, and you are forced to use a monolithic source file, then at least we can use the concepts of a linker.
Linking Source Code
For inspiration and details I'd look into Operating System Design a little bit, for the concepts of device drivers, and IO devices, and network sockets. I'd use this to take a hard look at the source that would be generated, and what exactly changes if a device is changed, and fix it so that as little as possible, ideally nothing, has to be changed.
The code for the app running on the (presumably embedded) system should be maintained separate from the device drivers, so here is where the abstraction needs to begin. It needs to be refactored to abstract away the particulars of the devices into abstract device classes.
So this is the first step: refactor the generated source to abstract out the particulars of the device drivers, so that you have a main application that calls functions via symbols.
This allows the main app to work regardless of the number and kind of devices available.
Next, I'd look into compiler theory, particularly the concepts of symbol resolution and static/dynamic linking, and stub. Since the generated source is refactored so that there is a main application and a list of device drivers, all that is left is to make the devices available to the application.
Illustration
The application could generate the source code to be assembled by concatenating the source for the main application with the source for the device drivers.
It would provide a stub as well: a small library providing a function to iterate the devices and interrogate their classes.
Your application then becomes so simple that a one-liner on a *NIX prompt could do it. No Java required:
cat program stub drivers/foo drivers/bar > generated-source-or-binary
In its simplest form, the program would contain a call to a list_devices label in the stub.
Here's a layout of the source and/or binary image:
// application
main() {
    for ( device in list_devices() ) {
        switch ( device.class ) {
            ....
        }
    }
}
// stub
list_devices() {
    // each driver record starts with its own size, so advance by that prefix
    for ( device = first; device != null; device += *device )
        yield device;
}
first: // drivers follow
// drivers/foo
dev_foo: .long dev_foo_end - . // size
    ....
dev_foo_end:
// drivers/bar
dev_bar: .long dev_bar_end - .
    ....
dev_bar_end:
Organizing Driver Sources
This shouldn't have to be more complicated than a directory with files.
A simple approach would be to include these in the .jar in a specific package. For instance, having a class provide driver sources like this:
package myapp.drivers;

import java.io.InputStream;

public class DriverSource {
    public static InputStream getDriverSource(String identifier) {
        // 'this' is not available in a static method, so use the class literal
        return DriverSource.class.getClassLoader().getResourceAsStream(
                DriverSource.class.getPackage().getName().replace('.', '/')
                        + '/' + identifier + ".source");
    }
}
would require the driver sources to be put in myapp/drivers/{identifier}.source. In a standard Eclipse project, you'd place the files in src/myapp/drivers/. Using Maven, you'd put them in src/main/resources/myapp/drivers/. You can also put them in another directory, as long as they are copied as resources to the proper package directory.
The above class could also serve as a basis for more complex storage: you could query a remote service and download the source files, or query an SQL database. But resource files will be a decent start.
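For completeness, a hypothetical caller could mirror the cat one-liner from the illustration by concatenating the main program, the stub, and the drivers; the identifiers used here are assumptions:
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;

import myapp.drivers.DriverSource;

public class GenerateImage {
    public static void main(String[] args) throws Exception {
        // equivalent of: cat program stub drivers/foo drivers/bar > generated-source-or-binary
        try (OutputStream out = new FileOutputStream("generated-source-or-binary")) {
            for (String id : new String[] { "program", "stub", "foo", "bar" }) {
                try (InputStream in = DriverSource.getDriverSource(id)) {
                    byte[] buffer = new byte[8192];
                    int read;
                    while ((read = in.read(buffer)) != -1) {
                        out.write(buffer, 0, read);
                    }
                }
            }
        }
    }
}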
I have a List of classes which I can iterate through. Using Java, is there a way of finding out where these classes are used, so that I can write it out to a report?
I know that I can find this out using 'References' in Eclipse, but there are too many to do this manually, so I need to be able to do it programmatically. Can anyone give me any pointers, please?
Edit:
This is static analysis and part of creating a bigger traceability report for non-technical people. I have comprehensive Javadocs, but they are not 'friendly' and also work in the opposite direction to how I need the report: Javadocs start from the package and work downwards, whereas I need to start at the variable level and work upwards, if that makes any sense.
You could try to add a stacktrace dump somewhere in the class that isolates the specific case you are looking for.
public void someMethodInMyClass()
{
    if (conditions_are_met_to_identify)
    {
        Thread.dumpStack();
    }
    // ... original code here
}
You may have to scan all the sources and check the import statements (taking care of the * imports: set up your scanner for both the fully qualified class name and its packagename.*).
EDIT: It would be great to use the Eclipse search engine for this. Perhaps here is the answer.
Still another approach (probably not complete):
Search Google for 'java recursively list directories and files' and get source code that will recursively list all the *.java file path/names in a project.
For each file in the list:
1: See if the file path/name is in the list of fully qualified file names you are interested in. If so, record its path/name as a match.
2: Regardless of whether it's a match or not, open the file and copy its content to a List collection. Iterate through the content list and see if the class name is present. If found, determine its path by seeing if it's in the same package as the current file you are examining. If so, you have a match. If not, you need to extract the paths from the import statements, add them to the class name, and see if it exists in your recursive list of file path/names. If still not found, add it to a 'not found' list (including what line number it was found on) so you can manually see why it was not identified.
3: Add all matches to a 'found match' list. Examine the list to ensure it looks correct. A rough sketch of this scan follows.
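Here is a minimal sketch of that scan; the source root (src) and the target class name (com.example.MyClass) are placeholders:
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class UsageScanner {
    public static void main(String[] args) throws IOException {
        Path sourceRoot = Paths.get("src");          // assumption
        String fqcn = "com.example.MyClass";         // assumption
        String simpleName = fqcn.substring(fqcn.lastIndexOf('.') + 1);
        String packageName = fqcn.substring(0, fqcn.lastIndexOf('.'));

        try (Stream<Path> paths = Files.walk(sourceRoot)) {
            paths.filter(p -> p.toString().endsWith(".java"))
                 .filter(p -> mentionsClass(p, fqcn, simpleName, packageName))
                 .forEach(System.out::println);
        }
    }

    private static boolean mentionsClass(Path file, String fqcn,
                                         String simpleName, String packageName) {
        try {
            String content = new String(Files.readAllBytes(file), StandardCharsets.UTF_8);
            // the simple name must appear, and the file must either import the
            // class (directly or via a * import) or live in the same package
            return content.contains(simpleName)
                    && (content.contains("import " + fqcn + ";")
                        || content.contains("import " + packageName + ".*;")
                        || content.contains("package " + packageName + ";"));
        } catch (IOException e) {
            return false; // unreadable file: skip it
        }
    }
}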
Not sure what you are trying to do, but in case you want to analyse code during runtime, I would use an out-of-the-box profiler that shows you what is loaded and what is allocated.
Open source profilers: Open Source Java Profilers
On the other hand, if you want to do this yourself (During runtime) you can write your own custom profiler:
How to write a profiler?
You might also find this one useful (Although not exactly what you want):
How can I list all classes loaded in a specific class loader
http://docs.oracle.com/javase/7/docs/api/java/lang/instrument/Instrumentation.html
If what you are looking is just to examine your code base, there are really good tools out there as well.
See http://en.wikipedia.org/wiki/List_of_tools_for_static_code_analysis
I have to develop a small app to compare automatically generated folders. It must compare the folders, sub-folders, and file contents. The problem is that this app needs to be launchable either by a user on his computer, to manually check for changes, or automatically along with the ANT nightlies. In the first case the results are displayed as a table in the Swing GUI, but in the other case it must write the results to a file (the format doesn't matter: XML, CSV, ...).
Anyone got some tips, or a link to a tutorial?
You might want to add some command line option that switches between ui and file export, e.g. --gui or --export=[filename]. You could use Apache Commons CLI for that.
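A minimal sketch of that option handling with Apache Commons CLI might look like this (the option names follow the suggestion above; everything else is an assumption):
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.DefaultParser;
import org.apache.commons.cli.Options;
import org.apache.commons.cli.ParseException;

public class ComparerMain {
    public static void main(String[] args) throws ParseException {
        Options options = new Options();
        options.addOption(null, "gui", false, "display the results in the Swing GUI");
        options.addOption(null, "export", true, "write the results to the given file");

        CommandLine cmd = new DefaultParser().parse(options, args);
        if (cmd.hasOption("export")) {
            String filename = cmd.getOptionValue("export");
            // run the comparison and write the report to 'filename'
        } else {
            // default (or --gui): launch the Swing table view
        }
    }
}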
The other method is to create a set of classes that performs the task and returns a set of values, which can then be either written to disk or displayed in a GUI. That is, an engine and two front-ends (the GUI and the CLI).
For example:
public interface DirectoryComparer {
    CompareResult performCompare(Directory dir1, Directory dir2);

    public static interface CompareResult {
        // ...things here that you need, such as file or dir differences, etc.
        Iterable<File> getFileDiff();
        Iterable<Directory> getDirectoryDiff();
    }
}
Then the GUI client will just use DirectoryComparer to display the results, and the CLI client will write these results to a file or three. But those two clients are completely separate and can be maintained separately.
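To make the split concrete, a hypothetical CLI front-end might look like the sketch below; Directory, DefaultDirectoryComparer, and the report format are all assumptions, not part of the interface above:
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

public class CliClient {
    public static void main(String[] args) throws IOException {
        // assumed engine implementation and Directory type from the interface above
        DirectoryComparer comparer = new DefaultDirectoryComparer();
        DirectoryComparer.CompareResult result =
                comparer.performCompare(new Directory(args[0]), new Directory(args[1]));

        // write one differing file per line; CSV or XML would work just as well
        try (PrintWriter out = new PrintWriter(new FileWriter("report.txt"))) {
            for (File f : result.getFileDiff()) {
                out.println(f.getPath());
            }
        }
    }
}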