I am maintaining a Java application where we're constantly adding new features (changes to the API). I want to move towards using OpenAPI as a way to document the API. I see two schools of thought:
1. Write the code, and use some annotations to generate the OpenAPI spec.
2. Write the OpenAPI spec, and use it to generate some server code.
While both seem fine and dandy, the server code is simply stubbed out and would then require a lot of manual plugging-in of services. While that seems fine as a one-time cost, the next time I update the interface, it seems to me the only two options are:
1. Generate it all again and re-do all the manual wiring.
2. Hand-edit the previously generated classes to match the new spec file (potentially introducing errors).
Am I correct with those options? If so, it seems that using the code to generate the api spec file is really the only sane choice.
I would recommend an API First approach where you describe your API in the YAML file and regenerate with each new addition.
Now, how do you deal with the generator overwriting manual work?
You could use inheritance to create your models and controllers based on the generated code.
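For example, the Spring generator can emit only API interfaces (its interfaceOnly option), and your hand-written controllers implement them from a source tree the generator never touches. A minimal sketch, where PetApi, Pet, and PetService are illustrative names, not from your project:

    import org.springframework.http.ResponseEntity;
    import org.springframework.web.bind.annotation.RestController;

    // Hand-written controller; it lives outside the generated sources,
    // so regenerating from the spec never overwrites it.
    @RestController
    public class PetController implements PetApi { // PetApi is generated

        private final PetService petService; // your manually wired service

        public PetController(PetService petService) {
            this.petService = petService;
        }

        @Override
        public ResponseEntity<Pet> getPetById(Long petId) {
            return ResponseEntity.ok(petService.find(petId));
        }
    }

When the spec changes, you regenerate PetApi and the compiler points you at exactly the controller methods that need adjusting; none of the manual wiring is lost.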
You can also use the .ignore file provided with the generator if you want to be sure certain files are never overwritten.
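With OpenAPI Generator that file is called .openapi-generator-ignore (swagger-codegen uses .swagger-codegen-ignore) and takes .gitignore-style patterns; the paths below are illustrative:

    # Never overwrite the hand-edited service wiring
    src/main/java/com/example/service/*.java
    # But do keep regenerating this one file
    !src/main/java/com/example/service/GeneratedMarker.java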
I want Nutch to select specific URLs according to my own rules. This step is done at generate time. I know how to write a parser/indexer plugin, but how do I do this at generate time? My Nutch version is the 2.3 series.
The Nutch generator is not really an extension point in Nutch, so you are not able to write plugins to customize it. Nevertheless, nothing stops you from writing your own generator with your own logic.
You would need to adjust the bin/nutch and bin/crawl scripts in order to call your own generator instead of the default one. Keep in mind that some other parts of Nutch rely on some parts of the generator implementation (SegmentMerger for instance). If you customize these parts, then you'll need to update some other classes as well.
The generator uses the ScoringFilter.generatorSortValue() method when deciding which elements to return. So, this is one alternative that doesn't require changing the generator.
Side note: this is not entirely uncommon to do; I've seen some clients requiring customized generators.
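To make the ScoringFilter route concrete, here is a rough sketch of such a plugin, written from memory against the Nutch 2.x API (verify the exact signatures in your 2.3 tree):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.nutch.scoring.ScoringFilter;
    import org.apache.nutch.scoring.ScoringFilterException;
    import org.apache.nutch.storage.WebPage;

    // Pages matching our rule sort first in the generator's ordering;
    // everything else sinks to the bottom.
    public class SelectiveScoringFilter implements ScoringFilter {

        private Configuration conf;

        @Override
        public float generatorSortValue(String url, WebPage page, float initSort)
                throws ScoringFilterException {
            return url.contains("/articles/") ? initSort * 10f : 0f;
        }

        @Override
        public void setConf(Configuration conf) { this.conf = conf; }

        @Override
        public Configuration getConf() { return conf; }

        // The remaining ScoringFilter methods (injectedScore, initialScore,
        // distributeScoreToOutlinks, ...) are omitted here; a real plugin
        // must implement them, typically as no-ops that pass scores through.
    }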
As suggested by Jorge, you could write a ScoringFilter to assign scores to pages based on your own logic and filter during the generation step based on that. Alternatively, if by chance your selection rules can be determined from the URL alone, you could have a bespoke URL normalizer used with a scope of generate (or whatever the value is) that would rewrite the URLs into something the URL filters would then discard. You'd need to activate the filtering as part of the generate step. This is an ugly hack.
Nutch 2.x is really awkward and I am not sure you could create a copy of your table based on a filter of the original one.
What Gora backend do you use?
StormCrawler is a lot more flexible for this and we've recently added a mechanism for filtering URLs at the spout level, which is exactly what you'd need. You could do a similar thing in Nutch 2.x but that would probably mean changing things in GORA as well.
I'm trying to figure out the best way to have my API documentation be the source of truth and use it to validate the actual Java REST code, ideally through integration testing or something of that sort. We're using the contract-first (or consumer contract) type of approach, so we don't necessarily want the documentation to be generated from annotated code and updated every time a developer makes a change.
One thought has been to use Swagger, but I'm not sure how best to use it for validating the API. Ideally, the validation would occur in the build or integration-testing process, checking that the real response (and request, if possible) matches what's expected. I know there are a lot of uses and tools for Swagger, and I'm just trying to wrap my head around it. Or perhaps there is a better alternative that works with Java code.
Recently, we (the swagger-codegen community) started adding automatic test case generation to API clients (C#, PHP, Ruby). We've not added that to Java yet. Here are some example test cases generated by Swagger-Codegen for C#:
https://github.com/swagger-api/swagger-codegen/tree/master/samples/client/petstore/csharp/SwaggerClient/src/IO.Swagger.Test
It's still very preliminary and we would like to hear feedback from you to see if that's what you're looking for.
I think you should try swagger-request-validator:
https://bitbucket.org/atlassian/swagger-request-validator
Here are some examples how to use it:
https://bitbucket.org/atlassian/swagger-request-validator/src/master/swagger-request-validator-examples/
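For instance, its REST Assured module ships a filter that validates every request/response pair against the spec as the test runs. A minimal sketch (spec path and endpoint are illustrative; older versions name the filter SwaggerValidationFilter):

    import static io.restassured.RestAssured.given;

    import com.atlassian.oai.validator.restassured.OpenApiValidationFilter;
    import org.junit.Test;

    public class ApiContractTest {

        // Fails the test if the interaction does not match the spec.
        private final OpenApiValidationFilter validationFilter =
                new OpenApiValidationFilter("api-spec.yaml");

        @Test
        public void getUserMatchesContract() {
            given()
                .filter(validationFilter)
            .when()
                .get("/users/1")
            .then()
                .statusCode(200);
        }
    }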
Another alternative is assertj-swagger:
https://github.com/RobWin/assertj-swagger
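assertj-swagger works the other way around: it compares the spec your running implementation actually exposes against your hand-written, design-first spec. Roughly, assuming an illustrative endpoint and file location:

    import io.github.robwin.swagger.test.SwaggerAssertions;
    import org.junit.Test;

    public class ContractComplianceTest {

        @Test
        public void implementationMatchesDesign() {
            // Left: spec produced by the running app; right: the contract file.
            SwaggerAssertions.assertThat("http://localhost:8080/v2/api-docs")
                    .isEqualTo("src/main/resources/swagger.yaml");
        }
    }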
You may want to look at Spring Cloud Contract. It offers you a DSL where you can describe scenarios (more or less: what is the response I get for a given request), and it seems to fit well with what you described as a requirement...
If you're using the Spring Framework, I'd highly recommend checking out Spring REST Docs, which allows you to generate documentation snippets from your tests, so the documentation is verified against the API's actual behaviour.
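A minimal MockMvc-based sketch (JUnit 4 flavour; the endpoint, snippet directory, and class names are illustrative):

    import static org.springframework.restdocs.mockmvc.MockMvcRestDocumentation.document;
    import static org.springframework.restdocs.mockmvc.MockMvcRestDocumentation.documentationConfiguration;
    import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
    import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

    import org.junit.Before;
    import org.junit.Rule;
    import org.junit.Test;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.restdocs.JUnitRestDocumentation;
    import org.springframework.test.web.servlet.MockMvc;
    import org.springframework.test.web.servlet.setup.MockMvcBuilders;
    import org.springframework.web.context.WebApplicationContext;

    public class UserDocsTest {

        @Rule
        public JUnitRestDocumentation restDocumentation =
                new JUnitRestDocumentation("target/generated-snippets");

        @Autowired
        private WebApplicationContext context; // supplied by the Spring test runner

        private MockMvc mockMvc;

        @Before
        public void setUp() {
            this.mockMvc = MockMvcBuilders.webAppContextSetup(context)
                    .apply(documentationConfiguration(restDocumentation))
                    .build();
        }

        @Test
        public void listUsers() throws Exception {
            mockMvc.perform(get("/users"))
                   .andExpect(status().isOk())
                   .andDo(document("list-users")); // writes snippets for the docs
        }
    }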
I know this topic may have been discussed here before: making Java annotations that carry logic and perform specific actions based on conditions.
Famous examples, of course, are JUnit and Hibernate.
I have also seen annotations that, when placed on a web service controller endpoint, check the header for an authentication token; if the user is not authorized, they return Unauthorized without even entering the endpoint.
I have also seen an Android library that handles most of the usual application logic with annotations: http://androidannotations.org/.
None of the tutorials I have seen on the internet about this topic give clear examples of how to implement it with minimal code; in the end, extra code gets written, which conflicts with the main purpose of using annotations with logic: saving time by writing less code.
Take, for example, this snippet from http://androidannotations.org/:
    @NoTitle
is equivalent to
    requestWindowFeature(Window.FEATURE_NO_TITLE);
In this example they seem just to include their annotation library; they haven't changed anything else or added any extra code, such as changing the base class (which is Activity).
Are things just abstracted too much?
And if so, how can I reach this level of abstraction to make something like the Android library I mentioned above?
Any design patterns recommended for this?
The example that you mentioned, i.e., http://androidannotations.org/, is in fact a good implementation of annotations.
In your example, the Android runtime must be assigning values to the properties (objects) at runtime, based on the annotation specified. Methods can also be picked up for execution based on the annotations specified on them.
Annotations are a simple but powerful concept in Java. You can simplify the usage of your API to a large extent.
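For instance, here is a minimal, self-contained sketch of a runtime-retained annotation plus the reflection code a framework would use to act on it (all names are made up for illustration):

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;
    import java.lang.reflect.Method;

    // A runtime-retained annotation carrying a parameter.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface RequiresAuth {
        String role() default "user";
    }

    public class AnnotationDemo {

        @RequiresAuth(role = "admin")
        public void deleteEverything() { /* ... */ }

        public static void main(String[] args) {
            for (Method m : AnnotationDemo.class.getDeclaredMethods()) {
                RequiresAuth auth = m.getAnnotation(RequiresAuth.class);
                if (auth != null) {
                    // A framework would check the caller's token here before
                    // allowing the call to proceed.
                    System.out.println(m.getName() + " requires role: " + auth.role());
                }
            }
        }
    }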
Check this post https://devcompass.com/2016/05/08/java-annotations-converting-java-objects-to-excel-data/ for information on how to create annotations from the beginning. Check out the source code zip file at the end of the page.
Trust me, annotations are very simple to learn and they can make a big difference in the source code implementation.
I'm currently debugging some fairly complex persistence code, and trying to increase test coverage whilst I'm at it.
Some of the bugs I'm finding against the production code require large, and very specific object graphs to reproduce.
While technically I could sit and write out buckets of instantiation code in my tests to reproduce the specific scenarios, I'm wondering if there are tools that can do this for me?
I guess specifically I'd like to be able to dump out an object as it is in my debugger frame (probably to XML), then use something to load the XML back in and create the object graph for unit testing (e.g., XStream).
Can anyone recommend tools or techniques which are useful in this scenario?
I've done this sort of thing using ObjectOutputStream, but XML should work fine. You need to be working with a serializable tree. You might try JAXB or xStream, etc., too. I think it's pretty straightforward. If you have a place in your code that builds the structure in a form that would be good for your test, inject the serialization code there, and write everything to a file. Then, remove the injected code. Then, for the test, load the XML. You can stuff the file into the classpath somewhere. I usually use a resources or config directory, and get a stream with Thread.currentThread().getContextClassLoader().getResourceAsStream(name). Then deserialize the stuff, and you're good to go.
XStream is of use here. It'll allow you to dump practically any POJO to/from XML without having to implement interfaces/annotate etc. The only headache I've had is with inner classes (since it'll try and serialise the referenced outer class).
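A minimal sketch of that round trip with XStream (class and file names are illustrative; note that recent XStream versions require you to whitelist types before deserializing):

    import com.thoughtworks.xstream.XStream;

    import java.io.InputStream;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class FixtureSnapshots {

        private final XStream xstream = new XStream();

        // Call this from the code path (or a debugger "evaluate expression")
        // where the interesting object graph is live.
        public void dump(Object graph, String file) throws Exception {
            Files.write(Paths.get(file),
                    xstream.toXML(graph).getBytes(StandardCharsets.UTF_8));
        }

        // In the test, load the snapshot back from the classpath.
        @SuppressWarnings("unchecked")
        public <T> T load(String resource) {
            InputStream in = getClass().getResourceAsStream(resource);
            return (T) xstream.fromXML(in);
        }
    }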
I guess all your data is persisted in a database. You can use a test data generation tool to fill your database with test data, then export that data in the form of SQL scripts and preload it before your integration test starts.
You can use DBUnit to preload data in your unit test; it also has a number of options to verify database structure/data before the test starts. http://www.dbunit.org/
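A minimal sketch of the DBUnit setup (driver, URL, and dataset path are illustrative):

    import org.dbunit.IDatabaseTester;
    import org.dbunit.JdbcDatabaseTester;
    import org.dbunit.dataset.IDataSet;
    import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
    import org.junit.Before;

    public class PersistenceTestBase {

        protected IDatabaseTester databaseTester;

        @Before
        public void setUp() throws Exception {
            databaseTester = new JdbcDatabaseTester(
                    "org.h2.Driver", "jdbc:h2:mem:testdb", "sa", "");
            // Load a known dataset from the test classpath.
            IDataSet dataSet = new FlatXmlDataSetBuilder()
                    .build(getClass().getResourceAsStream("/datasets/scenario.xml"));
            databaseTester.setDataSet(dataSet);
            databaseTester.onSetup(); // default operation: CLEAN_INSERT
        }
    }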
For test data generation in a database there are a number of commercial tools you can use. I don't know of any good free tool that can handle features like predefined lists of data, random data with a predefined distribution, foreign key usage from another table, etc.
I don't know about Java but if you change the implementations of your classes then you may no longer be able to deserialize old unit tests (which were serialized from older versions of the classes). So in the future you may need to put some effort into migrating your unit test data if you change your class definitions.
I am considering starting a project which is used to generate code in Java using annotations (I won't get into specifics, as it's not really relevant). I am wondering about the validity and usefulness of the project, and something that has struck me is the dependence on the Annotation Processor Tool (apt).
What I'd like to know, as I can't speak from experience, is what are the drawbacks of using annotation processing in Java?
These could be anything, including the likes of:
it is hard to do TDD when writing the processor
it is difficult to include the processing on a build system
processing takes a long time, and it is very difficult to get it to run fast
using the annotations in an IDE requires a plugin for each, to get it to behave the same when reporting errors
These are just examples, not my opinion. I am in the process of researching if any of these are true (including asking this question ;-) )
I am sure there must be drawbacks (for instance, Qi4J specifically list not using pre-processors as an advantage) but I don't have the experience with it to tell what they are.
The only reasonable alternative to using annotation processing is probably to create plugins for the relevant IDEs to generate the code (something vaguely similar to the override/implement methods feature that generates all the signatures without method bodies). However, that step would have to be repeated each time the relevant parts of the code change; annotation processing would not, as far as I can tell.
In regard to the example given with the invasive amount of annotations, I don't envision the use needing to be anything like that; maybe a handful for any given class. That wouldn't stop it from being abused, of course.
I created a set of JavaBean annotations to generate property getters/setters, delegation, and interface extraction (edit: removed link; no longer supported)
Testing
Testing them can be quite trying...
I usually approach it by creating a project in eclipse with the test code and building it, then make a copy and turn off annotation processing.
I can then use Eclipse to compare the "active" test project to the "expected" copy of the project.
I don't have too many test cases yet (it's very tedious to generate so many combinations of attributes), but this is helping.
Build System
Using annotations in a build system is actually very easy. Gradle makes this incredibly simple, and using it in eclipse is just a matter of making a plugin specifying the annotation processor extension and turning on annotation processing in projects that want to use it.
I've used annotation processing in a continuous build environment, building the annotations & processor, then using it in the rest of the build. It's really pretty painless.
Processing Time
I haven't found this to be an issue - be careful of what you do in the processors. I generate a lot of code in mine and it runs fine. It's a little slower in ant.
Note that Java6 processors can run a little faster because they are part of the normal compilation process. However, I've had trouble getting them to work properly in a code generation capacity (I think much of the problem is eclipse's support and running multiple-phase compiles). For now, I stick with Java 5.
Error Processing
This is one of the best-thought-through things in the annotation API. The API has a Messager object that handles all errors. Each IDE provides an implementation that converts this into appropriate error messages at the right location in the code.
The only eclipse-specific thing I did was to cast the processing environment object so I could check whether it was being run as a build or for editor reconciliation. If editing, I exit. Eventually I'll change this to just do error checking at edit time so it can report errors as you type. Be careful, though -- you need to keep it really fast for use during reconciliation or editing gets sluggish.
Code Generation Gotcha
[added a little more per comments]
The annotation processor specifications state that you are not allowed to modify the class that contains the annotation. I suspect this is to simplify the processing (further rounds do not need to include the annotated classes, preventing infinite update loops as well)
You can generate other classes, however, and they recommend that approach.
I generate a superclass for all of the get/set methods and anything else I need to generate. I also have the processor verify that the annotated class extends the generated class. For example:
    @Bean(...)
    public class Foo extends FooGen
I generate a class in the same package with the name of the annotated class plus "Gen" and verify that the annotated class is declared to extend it.
I have seen someone use the compiler tree api to modify the annotated class -- this is against spec and I suspect they'll plug that hole at some point so it won't work.
I would recommend generating a superclass.
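For reference, here is roughly what the superclass-generating approach looks like with the Java 6 (JSR-269) API; the annotation name, package, and generated content are illustrative only:

    import java.io.IOException;
    import java.io.Writer;
    import java.util.Set;

    import javax.annotation.processing.AbstractProcessor;
    import javax.annotation.processing.RoundEnvironment;
    import javax.annotation.processing.SupportedAnnotationTypes;
    import javax.lang.model.element.Element;
    import javax.lang.model.element.TypeElement;
    import javax.tools.Diagnostic;

    @SupportedAnnotationTypes("com.example.Bean")
    public class BeanProcessor extends AbstractProcessor {

        @Override
        public boolean process(Set<? extends TypeElement> annotations,
                               RoundEnvironment roundEnv) {
            for (TypeElement annotation : annotations) {
                for (Element element : roundEnv.getElementsAnnotatedWith(annotation)) {
                    String name = element.getSimpleName() + "Gen";
                    try {
                        // Generate the superclass next to the annotated class.
                        Writer w = processingEnv.getFiler()
                                .createSourceFile("com.example." + name, element)
                                .openWriter();
                        w.write("package com.example;\n"
                              + "public class " + name + " {\n"
                              + "    // generated getters/setters go here\n"
                              + "}\n");
                        w.close();
                    } catch (IOException e) {
                        // Errors go through the Messager, so each IDE can show
                        // them at the right location in the source.
                        processingEnv.getMessager().printMessage(
                                Diagnostic.Kind.ERROR, e.getMessage(), element);
                    }
                }
            }
            return true; // we claimed these annotations
        }
    }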
Overall
I'm really happy using annotation processors. Very well designed, especially looking at IDE/command-line build independence.
For now, I would recommend sticking with the Java 5 annotation processors if you're doing code generation - you need to run a separate tool called apt to process them, then do the compilation.
Note that the API for Java 5 and Java 6 annotation processors is different! The Java 6 processing API is better IMHO, but I just haven't had luck with Java 6 processors doing what I need yet.
When Java 7 comes out I'll give the new processing approach another shot.
Feel free to email me if you have questions. (scott@javadude.com)
Hope this helps!
I think if you're writing an annotation processor, then definitely use the Java 6 version of the API. That is the one which will be supported in the future. The Java 5 API was still in the non-official com.sun.xyz namespace.
I think we will see a lot more use of the annotation processor API in the near future. For example, Hibernate is developing a processor for the new JPA 2 query-related static metamodel functionality. They are also developing a processor for validating Bean Validation annotations. So annotation processing is here to stay.
Tool integration is OK. The latest versions of the mainstream IDEs contain options to configure the annotation processors and integrate them into the build process. The mainstream build tools also support annotation processing, though Maven can still cause some grief.
Testing, though, I find to be a big problem. All tests are indirect and somehow verify the end result of the annotation processing. I cannot write simple unit tests which just assert simple methods working on TypeMirrors or other reflection-based classes. The problem is that one cannot instantiate these types of classes outside the processor's compilation cycle. I don't think Sun really had testability in mind when designing the API.
One specific thing which would be helpful in answering the question: as opposed to what? Not doing the project, or doing it without annotations? And if not using annotations, what are the alternatives?
Personally, I find excessive annotations unreadable, and many times too inflexible. Take a look at this, for one method on a web service implementing a vendor-required WSDL:
    @WebMethod(action=QBWSBean.NS+"receiveResponseXML")
    @WebResult(name="receiveResponseXML"+result,targetNamespace = QBWSBean.NS)
    @TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
    public int receiveResponseXML(
            @WebParam(name = "ticket",targetNamespace = QBWSBean.NS) String ticket,
            @WebParam(name = "response",targetNamespace = QBWSBean.NS) String response,
            @WebParam(name = "hresult",targetNamespace = QBWSBean.NS) String hresult,
            @WebParam(name = "message",targetNamespace = QBWSBean.NS) String message) {
        // ... method body elided ...
    }
I find that code highly unreadable. An XML configuration alternative isn't necessarily better, though.