Many teams have a rule that the @author and @since tags must be used for every documented class, sometimes even for methods.
In an attempt to focus on what matters, I don't use these tags; instead I rely on the source control management system to tell me who the author of a class is and since when it has existed.
I believe that @author and @since come from a time when version control was not yet common, and I think they are rather redundant by now. What do you think? Should modern Java projects use them?
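For reference, the tags in question look like this (a minimal sketch; the class and all names are invented):

/**
 * Validates incoming payment requests.
 *
 * @author Jane Doe
 * @since 2.3
 */
public final class PaymentValidator {
}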
I think the @author tag actually confuses things. First of all, if it isn't updated judiciously, it becomes wrong. Also, what if you (not being the original author) change half a class? Do you update the @author? Do you add one? And what if you change only a few lines in the class?
I also think it promotes code ownership, which I don't think is a good thing. Anybody should be allowed to change a file. If there's an @author tag, people will tend to let that author make all the changes instead of doing it themselves and maybe learning something in the process.
Finally, as you said, the same information, in much more detail, is available from your VCS. Anything you add in Javadoc is duplication. And duplication is bad, right?
EDIT: Other answers mention the fact that you may not have access to the VCS, and in such cases the @author tag is useful. I humbly disagree. In such cases, you're most likely dealing with a 3rd party library, or perhaps an artifact from a different team inside your company. If that's the case, does it really matter who the individual was who created a certain class? Most likely, you only need to know the entity who created the library, and talk to their contact person.
Well, for one thing, Javadoc visibility typically transcends source control visibility. I can view the Javadocs for Java 1.1's library, but to my knowledge I can't freely peruse Sun's version history from back then.
You're talking as if your Javadocs are completely isolated to you (the developer), and not distributed to others as part of an API, etc. That's not always the case. Usually Javadocs and VCS information serve completely different purposes.
For me, even if I have free access to the version history of a file, I like being able to see it right there in the source, for the same reason that I like comments explaining odd code in the file instead of having to go to the commit description for a certain code block. It's quicker.
I know that we have used them, and they are really nice when simply perusing the source code. I have had more than one situation where the @since tag has been really handy, as it would otherwise have taken a bit of work to determine which version something was added in (by comparing dates, etc.).
Just my experience, however. I think @author has been less useful, but since we can auto-generate both pieces of data when creating new classes, letting the system do it doesn't seem like a waste to me.
I think documentation rules should be enforced only if you need them. If it's redundant for you to put those tags in your Javadoc, then don't enforce the rule. A case where it would matter is if anyone ever needs to see that information and doesn't have access to your version control.
No. The audience of the Javadoc pages may not have access to your source control, so this information is relevant.
The @since tag is important because the documentation may be consulted for older versions of the software. When you can see when a feature was introduced, you know 1) that it is unavailable to you and 2) that there is a good reason for upgrading.
You might, however, put a team email address in the @author tag so that readers know whom to contact.
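For example (class and address invented):

/**
 * Handles refund workflows.
 *
 * @author payments-team@example.com
 */
public final class RefundService {
}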
I stumbled upon the Hashing class from the com.google.common.hash package.
IntelliJ IDEA shows a warning whenever I use methods of that class.
The class itself is annotated with the @Beta annotation.
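Its declaration looks roughly like this (an abridged sketch, not the exact Guava source):

@Beta
public final class Hashing {
    // static factories such as murmur3_128(), sha256(), ...
}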
The description of the @Beta annotation says:
Signifies that a public API (public class, method or field) is subject to incompatible changes, or even removal, in a future release. An API bearing this annotation is exempt from any compatibility guarantees made by its containing library. Note that the presence of this annotation implies nothing about the quality or performance of the API ...
So the implementation of the API is fine and stable?
... in question, only the fact that it is not "API-frozen."
It is generally safe for applications to depend on beta APIs, at the cost of some extra work ...
Which kind of extra work?
... during upgrades. However it is generally inadvisable for libraries (which get included on users' CLASSPATHs, outside the library developers' control) to do so.
The question is whether it is safe / stable to use the mentioned class and its functionality. What is the tradeoff of using a beta API?
The implementation of the API is fine; you can rely on it, since it is an extensively used library from Google.
As for stability, you can do a little research and compare a couple of versions of this API a year apart, say 23.0 versus 27.0-jre:
https://google.github.io/guava/releases/23.0/api/docs/com/google/common/hash/Hashing.html
https://google.github.io/guava/releases/27.0-jre/api/docs/com/google/common/hash/Hashing.html
If you do a diff, the APIs from different years (2017 versus 2018) are exactly the same.
Therefore, I would interpret the @Beta here as a heads-up that the API may change in the future, but in practice it is stable, reliable, and heavily used.
Maybe at some point the Google developers will choose to remove the @Beta annotation; or maybe they intend to and have simply forgotten (speculation, of course).
The "extra work" referred to means that, if you build an application using this API, you may need to refactor your application slightly (imagine that a method signiture changes, or a method becomes deprecated and replaced) if you need to upgrade to the newest version of this API.
The degree of work there depends on how heavily and how often you use the API, and how deep the dependency on that API is (transitively, through other libraries, for example - those would also need to be rebuilt).
In summary, in this case: "don't worry, move along" :)
So the implementation of the API is fine and stable?
No way to know from this annotation.
To answer that you need to know how widely used it is, and for how long.
Which kind of extra work?
The kind of extra work that you have to do when a method that required only one parameter and returned a String now requires three parameters and returns a List<String>.
That is, code that uses this API might need to change because the API itself changed.
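A hypothetical sketch of such a change (all names are invented):

import java.nio.charset.Charset;
import java.util.List;

// Shape of the beta API before the upgrade:
interface TokenizerV1 {
    String tokenize(String input);
}

// Shape after an incompatible release; every call site must be rewritten:
interface TokenizerV2 {
    List<String> tokenize(String input, Charset charset, boolean strict);
}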
So the implementation of the API is fine and stable?
The quoted text says that the API is "subject to incompatible changes". That means that it (the API) is not stable.
Note also that the quoted text explicitly states that the annotation is saying nothing about whether or not the API's implementation works.
But also note that this is not a yes / no issue. It is actually a yes / no / maybe issue. Some questions don't have answers.
Which kind of extra work?
Rewriting some of your code ... if the API changes.
The question is whether it is safe / stable to use mentioned class and its functionality?
This requires the ability to predict the future. It is unanswerable. (Unless you ask the people who put that annotation on that API. They may be able to make a reliable prediction ...)
It is also unanswerable because it depends on what you mean by safe, and what context you intend to use the Hashing class in.
What is the tradeoff while using a beta API?
The tradeoff is self-evident:
On the plus side, you get to use new API functionality that may be beneficial to your application in the long run. (And if there is no evidence that it may be beneficial, this whole discussion is moot!)
On the minus side, you may have to rewrite some of your code if the authors modify the API ... as they say they might do.
While studying the standard Java library and its classes, I couldn't help noticing that some of those classes have methods that, in my opinion, have next to no relevance to those classes' purpose.
The methods I'm talking about are, for example, Integer#getInteger, which retrieves the value of a system property, or System#arraycopy, whose purpose is well defined by its name.
Still, both of these methods seem kind of out of place, especially the first one, which for some reason binds working with system resources to a primitive type wrapper class.
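To make the oddity concrete, here is what the two methods actually do (a runnable sketch; the property name is invented):

public class OddPlacements {
    public static void main(String[] args) {
        // Integer.getInteger reads the *system property* "my.port" and parses
        // it as an Integer, falling back to 8080 if it is unset:
        Integer port = Integer.getInteger("my.port", 8080);

        // System.arraycopy copies array contents, yet lives in System, not Arrays:
        int[] src = {1, 2, 3, 4, 5};
        int[] dst = new int[src.length];
        System.arraycopy(src, 0, dst, 0, src.length);

        System.out.println(port + " " + java.util.Arrays.toString(dst));
    }
}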
From my current point of view, such method placement looks like a violation of a fundamental OOP design principle: each class should be dedicated to solving its particular set of problems, not turn itself into a Swiss army knife.
But since I don't think the Java designers are idiots, I assume there is some logic behind the decision to place those methods right where they are. So I'd be grateful if someone could explain what that logic really is.
Thanks!
Update
A few people have hinted at the fact that Java does have its illogical things that are simply remnants of a turbulent past. Let me reformulate my question, then: why is Java so unwilling to mark its architectural flaws as deprecated? Existing deprecated features are unlikely to be removed in any foreseeable future, and marking things deprecated really helps people refrain from using them in newly written code.
This is a good thing to wonder about. For more recent features (such as generics and lambdas) there are several blog posts and mailing-list threads that explain the choices made by the library makers. These are very interesting to read.
In your case I expect the answer isn't too exciting. The reason these choices were made is hard to tell, but both classes have existed since JDK 1.0. In those days the quality of programming in general (and of Java and OO in particular) was perhaps lower: there were fewer common practices, and library makers had to invent many paradigms themselves. There were also other constraints at the time, such as object creation being expensive.
Many of those awkwardly designed methods and classes now have a better alternative (see Date and the java.time package).
You would expect arraycopy to live in the Arrays class, but unfortunately it is not there.
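For what it's worth, Arrays has offered copying helpers since Java 6, though none of them copies into an existing destination array the way System.arraycopy does. A small runnable sketch:

import java.util.Arrays;

public class CopyDemo {
    public static void main(String[] args) {
        int[] src = {1, 2, 3, 4, 5};

        // The JDK 1.0 way, oddly housed in System:
        int[] dst = new int[src.length];
        System.arraycopy(src, 0, dst, 0, src.length);

        // Since Java 6, Arrays can allocate and copy in one step:
        int[] copy = Arrays.copyOf(src, src.length);
        int[] slice = Arrays.copyOfRange(src, 1, 4); // {2, 3, 4}

        System.out.println(Arrays.toString(dst));
        System.out.println(Arrays.toString(copy));
        System.out.println(Arrays.toString(slice));
    }
}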
Ideally the original method would be deprecated for a while and then removed. Many libraries follow this strategy. Java, however, is very conservative about this and only deprecates things that really should not be used (such as Thread.stop()). I don't think a method has ever been removed from Java due to deprecation. This makes it fairly easy to upgrade your software to a newer version of Java, but it comes at the cost of leaving some clutter in the libraries.
The fact that Java is so conservative about keeping new JDK/JRE versions compatible with older source code and binaries is both loved and hated. For your hobby project, or a small actively developed project, upgrading to a new JVM that removes deprecated functions after a few years is not too difficult. But don't forget that many projects are not actively developed, or their developers have a hard time making changes safely, for instance because they lack a proper regression test suite. In such projects, changes in APIs cost a lot of time to comply with and run the risk of introducing bugs.
Also, libraries often try to support older versions of Java as well as newer ones; they would have a problem doing so if methods had been deleted.
The Integer example is probably just a design decision. If you want to implicitly interpret a property as an Integer, use java.lang.Integer. Otherwise you would have to provide a getter method for each java.lang type, something like:
System.getPropertyAsBoolean(String)
System.getPropertyAsByte(String)
System.getPropertyAsInteger(String)
...
And for each data type, you'd require one additional method for the default:
System.getPropertyAsBoolean(String, boolean)
System.getPropertyAsByte(String, byte)
...
Since the java.lang types already have parsing abilities (Integer.valueOf(String)), I am not too surprised to find a property-reading method there. Convenience in exchange for bending principles a tiny bit.
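In other words, Integer.getInteger is roughly a shorthand for the two-step dance below (a runnable sketch; the property name is invented):

public class GetIntegerDemo {
    public static void main(String[] args) {
        System.setProperty("demo.threshold", "42");

        // What you would otherwise write by hand:
        Integer byHand = Integer.valueOf(System.getProperty("demo.threshold"));

        // The convenience method living on the wrapper class:
        Integer viaWrapper = Integer.getInteger("demo.threshold");

        System.out.println(byHand + " == " + viaWrapper);
    }
}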
As for System.arraycopy, I guess it is an operation that depends on the underlying system: it probably copies memory from one location to another in a very efficient way. If I wanted to copy an array like that, java.lang.System is where I would look for it.
"I assume that there's some logic behind a decision to place those
methods right where they are."
While that is often true, I have found that when somethings off, this assumption is typically where you are mislead.
A language is in constant development, from the day someone proposes it to the day it is antiquated. In between those extremes are some phases that the language goes through. Especially if someone is spending money on it and wants people to use it, a very peculiar phase often occurs just before or after the first release:
The "we need this to work yesterday" phase.
This is where stuff like this happens: you have an almost complete language, but the programmers need to do something to show what the language can do, or a specific application needs a feature that was not designed into the language.
So where do we add this feature?
- well, where it makes the most sense to the particular programmer whose task it is to "make it work yesterday".
The logic may be that this is where the function makes the most sense, since it doesn't belong anywhere else and doesn't deserve a class of its own. It could also be something like: "so far, we have never done an array copy without using System; let's put arraycopy in there and save everyone an extra import."
In the next generation of the language, people will not move the feature, since some experienced programmers will complain. So the feature may be duplicated, and also found in a place where it makes more sense.
Much later, it will be marked as deprecated and eventually deleted, if anyone ever cares to clean it up.
I refactored a class by moving some of its methods to a different class. Since this was an architecture refactoring rather than a code refactoring, I was wondering whether it is good practice to mention in the Javadoc of the new classes that they contain methods moved from a previously existing class X. For example:
/**
 * Processor that sets the sequence on the payment group. This processor was
 * added as part of the checkout refactor project, and the xxxMethod() method
 * was moved from {@link XXXFormHandler} to this pipeline processor.
 */
I like this approach, since it gives a clear picture to any developer involved in maintaining this code. My only concern is that if this is exposed as an API, that information will be shown to everyone.
I don't think this is useful. You had strong reasons to refactor the code, so developers who know only the new code should not have to bother with the old architecture. Developers who know only the old code don't know where in the new code to look for that Javadoc, and once they know, they don't need it anymore. What they need is a migration guide, which exists independently of the Javadoc.
Regarding the last concern: if API users should not see that particular documentation, regular comments do the right thing, since maintainers looking at the source code will still see them.
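For illustration, a plain block comment is invisible to the javadoc tool, while a doc comment ends up in the generated API pages (a sketch with invented names):

/** Appears in the generated API documentation. */
public class SequenceProcessor {

    /* Visible only in the source: moved here from XXXFormHandler
       during the checkout refactor. */
    public void setSequence() {
        // ...
    }
}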
It is possible to access bits of MATLAB's internal java code to programmatically change MATLAB itself. For example, you can programmatically open a document in the editor using
editorServices = com.mathworks.mlservices.MLEditorServices;
editorServices.newDocument() %older versions of MATLAB seem to use new()
You can see the method signatures (but not what they do) using methodsview.
methodsview(com.mathworks.mlservices.MLEditorServices)
I have a few related questions about using these Java methods.
Firstly, is there any documentation on these things (either from MathWorks or otherwise)?
Secondly, how do you find out what methods are available? The ones I've come across appear to be contained in JAR files in matlabroot\java\jar, but I'm not sure what the best way to inspect a JAR file is.
Thirdly, are there functions for inspecting the classes, other than methodsview?
Finally, are there any really useful methods that anyone has found?
There is no official documentation or support for these classes. Moreover, these classes and internal methods represent internal implementation that may change without notice in any future Matlab release. That said, you can use my uiinspect and checkClass utilities to investigate the internal methods, properties and static fields. These utilities use Java reflection to do their job, something which is also done by the built-in methodsview function (I believe my utilities are far more powerful, though). In this respect, I believe we are not crossing the line of reverse engineering that might violate Matlab's license.
If you are looking for documentation, then my UndocumentedMatlab.com website has plenty of relevant resources, and more is added on a regular basis, so stay tuned.
I am also working on a book that will present a very detailed overview of all these internal classes, among other undocumented stuff - I hope to have publication news later this year.
I am an Eclipse fan. If you use it as your IDE, the JAR can be imported into one of your projects and you can inspect the methods there.
To find out more about Java objects, I use uiinspect.
The only place I know of that documents Matlab's hidden Java internals is Undocumented Matlab by Yair Altman. His site lists plenty of very useful tricks; being able to use Java to format text in list boxes has come in very handy for me, for example.
EDIT
The man has spoken. Listen to him; I don't think there's anyone outside MathWorks who knows more about Matlab's internal Java code.
Undocumented Matlab is a great place to start looking.
I am just curious: is it feasible to completely remove the Java standard class libraries that come with the JVM and start a new one from scratch [à la GNU Classpath]?
If that is possible, which classes MUST be implemented as a minimum? (Object and String come to mind, but... I do not know.)
Does such a thing break some license? And is there any way to tell the java command not to use rt.jar?
Thanks in advance,
Ernesto
You can use the -Xbootclasspath option to specify your own set of core classes.
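For example (path and main class invented; this is the pre-Java 9 form of the flag, which replaces the entire bootstrap class path):

java -Xbootclasspath:/path/to/my-core.jar com.example.Main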
If you do go down this path, you will probably end up with a lot of problems if you also intend to use third-party libraries, since they depend on the core API and any inconsistencies in your version will likely cause bugs.
As an absolute minimum you'd probably have to reimplement everything in the java.lang package. As well as Object and String, the primitive wrapper classes need to be present in order for auto-boxing to work. I don't think you can replace java.lang without a fair bit of native code to make things like threads work.
In theory, "yes" it is possible, though you might also need to implement your own JVM! The relationships between the JVM and some of the low level classes (Object, Class, Thread, etc) are such that these classes have to be implemented as part of the JVM.
In practice, it is such a big task that you'd be working on it for the rest of your life, and the chances are that nobody would use your code even if you succeeded. That doesn't sound like "fun" to me.
Does such a thing break some license?
Not per se. But if you ever tried to release it calling it "Java", Sun's lawyers would come after you for trademark infringement. You can only legally call your implementation Java if it has been validated against the Sun TCK.
But I don't want to be totally discouraging. If you want to hack on the internals of a JVM or stuff like that, the JNode project is always looking for keen new people.
No, it is not feasible at all. I mean, sure, you could do it, but you aren't going to do it better than a large corporation or an open source project with years of experience and large numbers of Java gurus. It might be fun to geek out on, though.