I am developing an Android application with my friend. I am currently responsible for the backend while she is working on the Android part. The backend is developed in Java using Lambda functions running in the AWS cloud. The frontend and the backend are totally decoupled (the Lambda functions are exposed via REST APIs) except for the POJOs used on both sides. The app serializes POJOs into JSON when calling an API, and the backend deserializes the JSON back into the very same POJOs when handling the request.
We want to keep POJOs on both sides exactly the same for obvious reasons but we are wondering what the proper way to do it is. We see the following two options:
1) Simply copy the code on both sides. This has the disadvantage that common code changes independently, which, sooner or later, will lead to a misalignment.
2) Move the POJOs out to a separate library and include it as a dependency on both sides. This seems like a more proper way to solve the issue, but how do we ensure that both my friend and I know that a POJO has changed? Let's say I remove one field from a POJO and create a new version of the shared library. I push the changes to our repository and then... tell my friend that I made some changes so she should pull them, build the new version, and include it in her project?
Is there a different (better) way to address this issue? Currently the backend is built with Maven, but I can switch to Gradle if that would help automate things and keep our code consistent (Android Studio forces Gradle builds).
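For what it's worth, option 2 can be partly automated with Gradle composite builds, so the Android project always compiles against the current shared sources instead of a stale jar. A minimal sketch, with hypothetical paths and coordinates:

    // settings.gradle of the Android project
    includeBuild('../shared-pojos')

    // app/build.gradle — Gradle substitutes this coordinate with the included build's output
    dependencies {
        implementation 'com.example:shared-pojos:1.0'
    }

With a setup like this, removing a field from a shared POJO breaks the Android build immediately instead of surfacing as a runtime mismatch.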
I found similar questions of other people but they were either a bit different or remained unanswered:
Sharing POJOs between Android project and java backend project
Sharing one java library between Android and Java backend (gradle)
Sharing code between Java backend and Android app
There are certainly lots of other ways of doing this; whether they are better or not, I will leave for you to consider.
But before getting to sharing the POJOs, I ask you to take a step back and look at your architecture. You essentially have:
a Java backend with REST APIs, supporting JSON payloads
an Android application, capable of making REST calls and deserializing the JSON payloads.
Note that the tech stack above does not involve POJOs at any level.
You see what I mean? POJOs are an implementation detail for you, and it is not wise to share them among your components.
How about looking into the future where you add more components to your architecture, say:
iOS application
Kotlin support for Android application
Will your inclination to share POJO code still be intact? Perhaps not.
From what I see, you should design and develop for a REST backend and a REST-capable client. That's all. That should be the bottom line.
So with that, coming back to your requirement of sharing updates between the backend and the client: you can share the JSON schema between the two instead of sharing the POJOs, and then employ an automated system (say, a simple script or a build plugin) to generate the POJOs in the backend and the client.
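For illustration, the shared artifact could be a plain JSON Schema file like this (the field names are hypothetical):

    {
      "$schema": "http://json-schema.org/draft-07/schema#",
      "title": "User",
      "type": "object",
      "properties": {
        "id":   { "type": "integer" },
        "name": { "type": "string" }
      },
      "required": ["id", "name"]
    }

A generator such as jsonschema2pojo (which has both Maven and Gradle plugins) can then produce the Java classes on each side as part of every build, so a schema change shows up as a compile error rather than a silent mismatch.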
This approach can have certain benefits. For instance:
You will be able to share updates now and in the future, as per your requirements.
This makes your modularity (or decoupling) better too, because the backend and the client are not bound by a requirement to use POJOs. For instance, you can use data classes if you decide to use Kotlin in your client.
You can version the schema for the future, for the times when the client cannot keep up with the backend or the backend needs to update independently.
and more
Adding to the answer above, I would take advantage of the fact that both sides build on the Java compiler and APIs. Whether the front end uses Java or Kotlin, you can call any of these API libraries directly from your code.
One API in particular, JSON-B (the Java API for JSON Binding), provides methods for transforming your Java (or Kotlin) objects into JSON for transport, then transforming the JSON response back into Java/Kotlin objects on the other end.
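A minimal sketch of that round trip, assuming a hypothetical shared User POJO (newer versions use the jakarta.json.bind package; older ones use javax.json.bind):

    import jakarta.json.bind.Jsonb;
    import jakarta.json.bind.JsonbBuilder;

    public class JsonbRoundTrip {
        // Hypothetical POJO shared between the app and the backend
        public static class User {
            public String name;
            public int age;
        }

        public static void main(String[] args) throws Exception {
            try (Jsonb jsonb = JsonbBuilder.create()) {
                User user = new User();
                user.name = "Alice";
                user.age = 30;

                String json = jsonb.toJson(user);                 // object -> JSON for transport
                User restored = jsonb.fromJson(json, User.class); // JSON -> object on the other end
                System.out.println(json + " -> " + restored.name);
            }
        }
    }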
One caveat: the javax.* Java EE APIs, including javax.json.bind, have been deprecated in favor of the jakarta.* namespace, so if you are planning on updating in the future this is something you will want to consider.
Also note that JSON-B is a specification rather than part of the JDK itself, so you need to add the API plus an implementation such as Eclipse Yasson to your build. IMO it is still the preferred option for working with JSON in Java.
I am a big believer in composition over inheritance, as well as libraries over frameworks (which is extremely similar to composition, since libraries are composed). This code from the gRPC Java example tells me, to some degree, that java-grpc is a framework in addition to being just a library (which is fine, nothing wrong with that)...
Server server = ServerBuilder.forPort(port)
.addService(new GreeterImpl())
.build()
.start();
I am trying to have my webserver (webpieces) receive requests on these URLs:
https://host/grpc -> webpieces would just re-use the grpc library (hopefully) for encoding/decoding
https://host/json/{grpcMethod} -> webpieces would grab the grpcMethod and decode the JSON into the gRPC object to call the service here
(edit for more clarity based on first answer)...
Key point -> JSON should come in AND use the same protobuf objects. I should not have to create JSON POJOs, as seen in this gRPC JSON blog post.
I really need a java-grpc API that I can interact with such that I can ask the library to encode/decode in both directions. This then gives customers the option to use gRPC on our public API or JSON if they prefer.
I am starting to think that with java-grpc this is not possible, since it's a framework rather than just a library (I mean it's a library too, of course, but since I have to plug in to it rather than use it, it is also a framework).
I did spend a good amount of time today starting to create a new io.grpc.ServerProvider and a new io.grpc.ManagedChannelProvider, but wow, there is way, way more to implement than I would have suspected for simply having these objects encoded/decoded for me, which is all I was looking for.
One last question, since gRPC runs over HTTP/2: I don't see a place in the java-grpc client or server to supply the URL path that should be hit?
webpieces actually supports streaming in both directions like gRPC, up to a point (the HTTP frontend 100% supports it). We have some work to do to drive it down into the controllers, which we will do at some point (at which point we can serve web pages, gRPC, and JSON all on the same serverless server we have set up).
BONUS POINTS: webpieces, like the old Java Play Framework, ALSO supports hot compile during development, so it would be VERY IDEAL if we can have the gRPC server under webpieces instead of the gRPC server being on top, killing off the no-restart development we have going on.
thanks for any tips there!
Dean
I still think gRPC is more like a library.
Anyway, the JSON blog post will give you a good idea of how to serialize and deserialize requests/responses. It also shows you how to manually provide MethodDescriptors and ServerServiceDefinitions for defining methods and services, including the routing.
Protobuf is the default serialization for gRPC, so using protobuf is much easier; you don't normally need to worry about those low-level APIs. Nonetheless, the core does not strongly depend on a specific serialization, as you can see from the blog post.
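For the JSON direction the blog post describes, protobuf's own JsonFormat utility (from protobuf-java-util) does the round trip against the very same generated message classes, so no separate JSON POJOs are needed. A minimal sketch, assuming the HelloRequest class generated from the Greeter example:

    import com.google.protobuf.InvalidProtocolBufferException;
    import com.google.protobuf.util.JsonFormat;
    import io.grpc.examples.helloworld.HelloRequest; // generated from the Greeter .proto

    public class JsonTranscoding {
        public static void main(String[] args) throws InvalidProtocolBufferException {
            HelloRequest request = HelloRequest.newBuilder().setName("world").build();

            // protobuf message -> JSON string
            String json = JsonFormat.printer().print(request);

            // JSON string -> the same protobuf message type
            HelloRequest.Builder builder = HelloRequest.newBuilder();
            JsonFormat.parser().merge(json, builder);
            System.out.println(builder.build().getName()); // "world"
        }
    }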
I am exploring options for converting my existing Java application, which uses the Spring framework, to Azure's serverless "Logic Apps" and "Functions" concepts without rewriting it.
As far as I can see, there is no accurate information on Microsoft's websites.
Can anyone please suggest a road map for how I can lift and shift my Java Spring framework application to Azure Functions?
I am aware that there will be small modifications I might need to make, and that is OK with me. But not a complete rewrite into some other language.
Thanks!
Hard to say exactly, but with the new Java support, as long as any package you are using can be resolved with Maven, it should work in Functions. The potentially bigger question is what can remain as-is. Likely each of your controller methods would become an Azure Function: the method signature would change, but the code inside should remain largely the same (an HttpTrigger with a request message bound to a route). The models should be able to stay untouched. If you had any orchestration or workflows, those would become orchestrated by Logic Apps; but since Logic Apps is no-code, that would mean re-creating the workflow/orchestration in Logic Apps.
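A minimal sketch of what one migrated controller method might look like as an HTTP-triggered function, using the azure-functions-java-library annotations (the function name and route are hypothetical):

    import java.util.Optional;
    import com.microsoft.azure.functions.ExecutionContext;
    import com.microsoft.azure.functions.HttpMethod;
    import com.microsoft.azure.functions.HttpRequestMessage;
    import com.microsoft.azure.functions.HttpResponseMessage;
    import com.microsoft.azure.functions.HttpStatus;
    import com.microsoft.azure.functions.annotation.AuthorizationLevel;
    import com.microsoft.azure.functions.annotation.BindingName;
    import com.microsoft.azure.functions.annotation.FunctionName;
    import com.microsoft.azure.functions.annotation.HttpTrigger;

    public class UserFunction {
        // Roughly the shape a former Spring @GetMapping("/users/{id}") method takes on
        @FunctionName("getUser")
        public HttpResponseMessage run(
                @HttpTrigger(name = "req",
                             methods = {HttpMethod.GET},
                             authLevel = AuthorizationLevel.FUNCTION,
                             route = "users/{id}")
                HttpRequestMessage<Optional<String>> request,
                @BindingName("id") String id,
                final ExecutionContext context) {
            context.getLogger().info("Looking up user " + id);
            // ...the body of the old controller method goes here, largely unchanged...
            return request.createResponseBuilder(HttpStatus.OK)
                          .body("user " + id)
                          .build();
        }
    }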
Java is still in preview so you may hit some snags here and there, but let us know if you have any other questions along the way.
It's not a very concrete question. I created a simple project with the help of this tutorial, and it works fine. All the GWT code samples related to JSON I have seen so far seem to work with JSON (or imitate this with some mock-up JSON) that is retrieved and processed in GWT. I'm a newbie in GWT, and I wonder what the cases of interacting with services that return JSON are (services are mentioned in the same tutorial) and what the pros and cons of such interaction are.
I thought about two options (well, "service" is an overloaded term):
everything that is mentioned in these GWT JSON tutorials is about third-party services, like the GData and Yahoo! Web Services mentioned here, which would make sense because it's about retrieving some data and processing it in the app,
and the second option is about services that are created within the confines of a project (and if there are such cases, and there definitely are, my question is about them).
It probably can't be fully explained in an answer, so a link (or a few) would be appreciated. Thanks in advance.
Your question is really quite generic. But here are some pointers:
JSON is just a data interchange format similar to XML or Protocol Buffers or some other proprietary format.
They are necessary in modern web applications because the UI is entirely controlled by the javascript code running in the browser.
However the data that a web application presents to the user usually resides on the backend. In order to get the data from the backend to the frontend you have to use some data interchange format like JSON or XML.
The advantage of JSON is that it is fairly efficient compared to XML and widely accepted.
As you mentioned there are third party services that rely on JSON. These are very useful when you want to include the services in your applications.
The biggest advantage of applying this service-oriented approach to your own project is that you decouple your components (frontend and backend). By doing this you achieve the following (see the sketch after this list):
Make your services available to other (web-)applications and users because your service exposes a specific API/data exchange format that they can use.
Easily replace or add another frontend (for example, create a desktop application in addition to your GWT application) that can work with your data (display or modify it).
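As a rough sketch of the second case (your own backend service returning JSON; the endpoint path is hypothetical), the GWT client side boils down to an HTTP request plus JSON parsing:

    import com.google.gwt.http.client.Request;
    import com.google.gwt.http.client.RequestBuilder;
    import com.google.gwt.http.client.RequestCallback;
    import com.google.gwt.http.client.RequestException;
    import com.google.gwt.http.client.Response;
    import com.google.gwt.http.client.URL;
    import com.google.gwt.json.client.JSONObject;
    import com.google.gwt.json.client.JSONParser;
    import com.google.gwt.json.client.JSONValue;

    public class UserLoader {
        // Fetch JSON from our own backend endpoint
        public void loadUsers() {
            RequestBuilder builder =
                    new RequestBuilder(RequestBuilder.GET, URL.encode("/api/users"));
            try {
                builder.sendRequest(null, new RequestCallback() {
                    @Override
                    public void onResponseReceived(Request request, Response response) {
                        if (response.getStatusCode() == 200) {
                            JSONValue parsed = JSONParser.parseStrict(response.getText());
                            JSONObject payload = parsed.isObject(); // null if not a JSON object
                            // ...update the UI from the payload...
                        }
                    }

                    @Override
                    public void onError(Request request, Throwable exception) {
                        // network failure, timeout, etc.
                    }
                });
            } catch (RequestException e) {
                // the request could not be dispatched
            }
        }
    }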
There seems to be an on-going philosophical debate about exactly how to version a REST web service. For me, though, the number one issue is the practical question of how easy or hard it is to implement and maintain in a Java servlet-based backend. My company is building a new REST web service, and while we are not concerned with versioning it at this time, I do not want to make an architectural decision that would paint us into a corner.
I guess the primary decision we have to make right now is whether we should put the version identifier in our URI or in the media type (or both). In case it's relevant, we will mint only a few new media types. Also, the application has 50+ resource URIs.
What are the pros and cons of each approach relative to implementing them in our Java servlet?
My initial thoughts:
1) I like the idea of versioning the media types (e.g. "application/vnd.mystuff+xml;version=1.0") because it feels the same as versioning the XML schema for a SOAP/RPC web service. Also, it seems like conneg was designed for just this sort of thing. However, it seems like implementing this would be difficult if not impossible, because our application is created with domain-driven design principles and we can't just create new versions of Java classes and have them sit alongside the old ones. So we'd have to implement the new Java classes and then hand-code JSON/XML serialization/deserialization to support all the old versions of the media types (see the sketch below). I guess this is no different from versioning an XML Schema for SOAP...
2) On the other hand, if we were to version the URIs (e.g. "http://ourapp.com/v1/mystuff"), then our customers could deploy two entire versions of the system and stand them up as two separate servlets with different context mappings. However, we'd have to modify the JPA (ORM) mappings of the v1 Java classes to use the v2 RDBMS schema. This may not always be simple, but I can easily understand how it could be done. This approach bugs me, however, because "application/vnd.mystuff+xml" will return two different XML models corresponding to two different XML schemas. But maybe it doesn't matter, as the client knows what it asked for because it used the v1 or v2 URI...
Or is there an easier-to-implement alternative approach we are not considering? Or is it OK to punt on this entire issue and worry about it later?
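To make option 1 concrete, this is roughly the kind of version sniffing each of our servlets would need to do (a sketch only; real code would need proper media-type parsing):

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class MyStuffServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            // e.g. Accept: application/vnd.mystuff+xml;version=1.0
            String accept = req.getHeader("Accept");
            String version = "1.0"; // default when the client does not negotiate
            if (accept != null && accept.contains("version=")) {
                version = accept.substring(accept.indexOf("version=") + "version=".length()).trim();
            }
            resp.setContentType("application/vnd.mystuff+xml;version=" + version);
            // ...pick the serializer matching the requested version and write the body...
        }
    }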
If the RESTful services are used by mobile applications, it would be a bit difficult to bake in the processing of both responses. In my opinion, keeping the version in the URL is the better way, as this makes it easier to decommission the old endpoints when convenient.
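A minimal sketch of the URL approach: the two versions are simply separate servlets, so decommissioning v1 later just means removing one class/mapping (the names are hypothetical):

    import java.io.IOException;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Old endpoint; stays deployed until clients have migrated.
    @WebServlet("/v1/mystuff")
    public class MyStuffV1Servlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            resp.setContentType("application/vnd.mystuff+xml");
            resp.getWriter().write("<mystuff/>"); // serialize the v1 model here
        }
    }

    // In a separate source file: the same resource backed by the v2 model.
    @WebServlet("/v2/mystuff")
    public class MyStuffV2Servlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            resp.setContentType("application/vnd.mystuff+xml");
            resp.getWriter().write("<mystuff/>"); // serialize the v2 model here
        }
    }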
I'm working on a Java project that (at least is trying to) utilize the Twitter API; however, it is my first project using any type of API and I am a little confused. What is the benefit of using a Java library for the Twitter API, such as Twitter4J, and how would one go about not using one? I'm a little fuzzy on the topic of APIs in general, and I'm not finding anything in my searches that really makes it clear how to use one. Do I need to use a Java library, or can I do it without one? What are the pros and cons of using one versus not using one? I am relatively new to this and am having some issues. Any help?
First, what an API is:
An application programming interface (API) is a particular set of rules ('code') and specifications that software programs can follow to communicate with each other. It serves as an interface between different software programs and facilitates their interaction, similar to the way the user interface facilitates interaction between humans and computers. An API can be created for applications, libraries, operating systems, etc., as a way of defining their "vocabularies" and resources request conventions (e.g. function-calling conventions). It may include specifications for routines, data structures, object classes, and protocols used to communicate between the consumer program and the implementer program of the API.
Using the Twitter4J library would allow you to easily call commands that do complex operations, such as getting tweets as they come in. For projects such as this, using a library is the best way to go about it; note that you are also going to be required to get an access key, which grants you permission to use the API.
Examples using Twitter4J: http://twitter4j.org/en/code-examples.html
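For instance, posting a tweet through Twitter4J takes only a couple of lines once your access key/credentials are in a twitter4j.properties file:

    import twitter4j.Status;
    import twitter4j.Twitter;
    import twitter4j.TwitterException;
    import twitter4j.TwitterFactory;

    public class FirstTweet {
        public static void main(String[] args) throws TwitterException {
            // Reads your OAuth credentials from twitter4j.properties on the classpath
            Twitter twitter = TwitterFactory.getSingleton();
            Status status = twitter.updateStatus("Hello from my first Twitter app!");
            System.out.println("Posted: " + status.getText());
        }
    }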
You need to distinguish between an "API" and a "library".
You NEED the Twitter API: it's the thing that connects Twitter to your code. You can use it to send a "post this to my account" command, for instance.
You CAN use a library: it helps your code talk to the API by doing some of the work for you. You can call a function with only a string as a parameter, and this function calls the aforementioned send-to-Twitter API.
You can of course say things like "the library has an API", but this would be confusing the situation a bit.
In the end it is quite nice to use the library, because it lets you write code in your own language.
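To make the distinction concrete, here is a rough sketch of both approaches; the raw call omits the OAuth request signing, which is the genuinely hard part you would otherwise have to write yourself:

    import java.net.HttpURLConnection;
    import java.net.URL;
    import twitter4j.Twitter;
    import twitter4j.TwitterFactory;

    public class ApiVsLibrary {
        public static void main(String[] args) throws Exception {
            // Talking to the REST API directly: you handle HTTP, auth, and parsing yourself.
            URL url = new URL("https://api.twitter.com/1.1/statuses/update.json");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            // ...compute and set a signed OAuth Authorization header, write the
            // form-encoded body, then read and parse the JSON response yourself...

            // The same operation through the Twitter4J library: one call.
            Twitter twitter = TwitterFactory.getSingleton();
            twitter.updateStatus("Hello, world");
        }
    }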