How to interface with an API from Android? (Java)

I built an Android app and a corresponding Node.js back-end to store data from it. I've got some API endpoints created, along with methods to save and retrieve data from MongoDB.
I understand how to make HTTPS requests from Java/Android, but I'm not sure how things should ideally be set up and structured. Is it best practice to create a new package for API-related things? For example, a package like:
API package
    Class containing GET requests
    Class containing POST requests
These classes might contain methods like:
public static String getItem(int item_id) {
    // HTTPS boilerplate
    // make the HTTPS GET request and process the response data
    return result;
}
Or is it better to create one class that handles making the HTTPS requests, and then implement all of the required API methods in a single, separate class?
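For reference, the kind of hand-rolled getItem helper sketched above might look roughly like the following (the URL and the ItemApi name are placeholders, and on Android this would have to run off the main thread):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

import javax.net.ssl.HttpsURLConnection;

public class ItemApi {

    // Note: on Android this must run off the main thread (e.g. on an executor),
    // otherwise it throws NetworkOnMainThreadException.
    public static String getItem(int itemId) throws Exception {
        URL url = new URL("https://example.com/api/items/" + itemId); // placeholder URL
        HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
        try {
            conn.setRequestMethod("GET");
            conn.setRequestProperty("Accept", "application/json");

            StringBuilder result = new StringBuilder();
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    result.append(line);
                }
            }
            return result.toString();
        } finally {
            conn.disconnect();
        }
    }
}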

There are several popular libraries out there that you can check out, like Retrofit or Android Volley. I recommend not writing the code that makes requests to the back-end yourself, because things like parsing JSON data shouldn't be done manually; everything becomes hard to manage as your project grows.
How to structure the code depends on personal preference. I personally use Retrofit, so I keep all network requests in an interface. For example:
interface APIs {
    @GET("api/user")
    Call<Users> getUsers();

    @GET("api/comment")
    Call<Comments> getComments();

    @GET("api/photos")
    Call<Photos> getPhotos();
}
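To actually issue those requests you also need a Retrofit instance. A minimal sketch, assuming a Gson converter and a made-up base URL (the ApiClient and loadUsers names are just for illustration):

import retrofit2.Call;
import retrofit2.Callback;
import retrofit2.Response;
import retrofit2.Retrofit;
import retrofit2.converter.gson.GsonConverterFactory;

public class ApiClient {

    // Hypothetical base URL; GsonConverterFactory handles the JSON parsing for you.
    private static final Retrofit RETROFIT = new Retrofit.Builder()
            .baseUrl("https://your-backend.example.com/")
            .addConverterFactory(GsonConverterFactory.create())
            .build();

    public static final APIs API = RETROFIT.create(APIs.class);

    // Example call: enqueue() runs on a background thread and calls back on the main thread.
    public static void loadUsers() {
        API.getUsers().enqueue(new Callback<Users>() {
            @Override
            public void onResponse(Call<Users> call, Response<Users> response) {
                // response.body() is the parsed Users object (null for HTTP error codes)
            }

            @Override
            public void onFailure(Call<Users> call, Throwable t) {
                // network failure or JSON conversion error
            }
        });
    }
}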
But if you want to learn how things should be set up and structured nicely, I suggest you read about Android Architecture Components and Clean Architecture, where logic is kept separate and grouped in a manageable way.

How to properly use Android Room in background tasks (unrelated to UI)

At the moment Room is working well for DB-to-UI integration:
Dao for DB operations
Repository for interacting with the Daos and caching data into memory
ViewModel to abstract the Repository and link to UI lifecycle
However, another scenario has come up where I'm having a hard time understanding how to properly use Room.
I have a network API that is purely static and constructed as a reflection of the server's REST architecture.
There is a parser method that walks through the URL structure, translates it to the existing API via reflection, and invokes whatever final method it finds.
In this API, each REST operation is represented by a method in the class that mirrors the equivalent REST naming structure, i.e.:
/contacts in REST equates to the class Contacts.java in the API
POST, GET, DELETE in REST equate to methods in the respective class
example:
public class Contacts {
    public static void POST() {
        // operations are conducted here
    }
}
Here is my difficulty: how should I integrate Room inside that POST method correctly/properly?
At the moment I have a makeshift solution, which is to instantiate the repository I need to insert data into and consume it. But this is a one-off every time the method is invoked, since there is absolutely no lifecycle here, nor is there a way to have one granular enough to be worth having in place (I don't know how long I will need a repository inside the API, so I can't justify caching it for X amount of time).
Example of what I currently have working:
public class Contacts {
    public static void POST(Context context, List<Object> list) {
        new ContactRepository(context).addContacts(list);
    }
}
Alternatively, using it as a singleton:
public class Contacts {
    public static void POST(Context context, List<Object> list) {
        ContactRepository.getInstance(context).addContacts(list);
    }
}
Everything works well for view-related Room interaction, given the lifecycle that exists there, but in this case I have no idea how to do this properly. These aren't just situations where a view triggers a network request (then I'd just use NetworkBoundResource or similar); this can also be data the server sends without the app ever requesting it, such as updates to app-related data, e.g. another user starting a conversation with you. The app has no way of knowing about that, so it comes from the server first.
How can this be properly implemented? I have not found any guide for this scenario and I am afraid I might be doing this incorrectly.
EDIT: This project is not Kotlin, as per the tags used and the examples provided; please provide solutions that do not depend on migrating to Kotlin to use coroutines or similar Kotlin features.
It seems that using a singleton pattern, as I was already doing, is the way to go. There appears to be no documentation available for a simple scenario such as this one, so this is basically a guessing game. Whether it is bad practice or carries any memory-leak risk, I have no idea because, again, there is just no documentation for this.
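For what it's worth, a sketch of that singleton approach which also sidesteps the two usual risks (writing on the main thread and leaking an Activity context) could look like the following; AppDatabase, ContactDao, Contact, and insertAll are hypothetical names, not from the question:

import android.content.Context;

import androidx.room.Room; // android.arch.persistence.room.Room on pre-AndroidX projects

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ContactRepository {

    private static volatile ContactRepository instance;

    private final ContactDao contactDao; // hypothetical Room DAO
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    private ContactRepository(Context context) {
        // Use the application context so the singleton never holds on to an Activity.
        AppDatabase db = Room.databaseBuilder(
                context.getApplicationContext(), AppDatabase.class, "app.db").build();
        contactDao = db.contactDao();
    }

    public static ContactRepository getInstance(Context context) {
        if (instance == null) {
            synchronized (ContactRepository.class) {
                if (instance == null) {
                    instance = new ContactRepository(context);
                }
            }
        }
        return instance;
    }

    public void addContacts(List<Contact> contacts) {
        // Room rejects inserts on the main thread, so hand the write to a background executor.
        executor.execute(() -> contactDao.insertAll(contacts));
    }
}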

How do I migrate a Java interface to a microservice?

I am looking at microservices, and the possibility of migrating some of our code to this architecture. I understand the general concept but am struggling to see how it would work for our example.
Supposing I have an interface called RatingEngine and an implementation called RatingEngineImpl, both running inside my monolithic application. The principle is simple: the RatingEngineImpl could run on a different machine and be accessed by the monolithic application via (say) a REST API, serializing the DTOs as JSON over HTTP. We even have an interface to help with this decoupling.
But how do I actually go about this? As far as I can see, I need to create a new implementation of the interface for the rump monolith (i.e. now the client), which takes calls to the interface methods, converts them into REST calls, and sends them over the network to the new 'rating engine service'. Then I also need to implement a new HTTP server, with an endpoint for each interface method, which deserializes the DTOs (method parameters) and routes the call to our original RatingEngineImpl, which sits inside the server. Then it serializes the response and sends it back to the client.
So that seems like an awful lot of plumbing code. It also adds maintenance overhead, since if you tweak a method in the interface you need to make changes in two more places.
Am I missing something? Is there some clever way we can automate this boilerplate code construction?
The microservice pattern does not suggest you move every single service you have to its own deployable. Only move self-sustaining pieces of logic that will benefit from their own release cycle. I.e. if your RatingEngine needs rating-logic updates weekly but the rest of your system is pretty stable, it will likely benefit from being a service of its own.
And yes, microservices add complexity, but not really the boilerplate code of HTTP servers; there are a lot of frameworks around to deal with that. Vert.x is a good one. Others are Spring Boot, Apache Camel, etc. A complete microservice setup could look like this with Vert.x:
import io.vertx.core.AbstractVerticle;
import io.vertx.core.json.JsonObject;

public class RatingService extends AbstractVerticle implements RatingEngine {

    public void start() {
        vertx.createHttpServer().requestHandler(req -> {
            req.response()
               .putHeader("content-type", "application/json")
               .end(computeCurrentRating().encodePrettily());
        }).listen(8080);
    }

    @Override
    public int getRating() {
        return 4; // or whatever.
    }

    protected JsonObject computeCurrentRating() {
        return new JsonObject().put("rating", getRating());
    }
}
Even JAX-RS, the standard Java EE framework for REST, helps you build a microservice in not too many lines of code.
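As a rough illustration (not from the answer above), a JAX-RS resource exposing the rating might look like this; the /rating path and the JSON payload are made up, and it still needs a JAX-RS runtime such as Jersey or RESTEasy deployed:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/rating")
public class RatingResource {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public String getRating() {
        // a real implementation would delegate to the existing RatingEngineImpl
        return "{\"rating\": 4}";
    }
}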
The really hard work with microservices is adding error-handling logic to the clients. Some common pitfalls:
Microservice may go down. If a call to RatingService gives a connection-refused exception, can you deal with it? Can you estimate a "rating" in the client so that further processing isn't blocked? Can you reuse old responses to estimate the rating? Or, at the very least, you need to signal the error to support staff.
Reactive app? How long can you wait for a response? A call to an in-memory method returns within nanoseconds; a call to an external HTTP service may take seconds or minutes depending on a number of factors. As long as the application is "reactive" and can continue to work without a "rating", presenting it to the user once it's available, that's fine. If you are waiting on a blocking call to the rating service for more than a few milliseconds, response time becomes an obstacle. It's not as convenient or common to make reactive apps in Java as in Node.js, and a reactive approach will likely trigger a remake of your entire system.
Tolerant client. Unit/integration testing a single project is easy; testing a complex net of microservices is not. The best thing you can do about it is to make your client calls less picky. Schema validation and the like are actually bad things here. In XML, use single XPaths to get exactly the data you want from the response, no more and no less. That way, a change in the microservice response will not require updating all clients. JSON is a bit easier to deal with than XML in this respect.
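To illustrate that last point, a lenient client-side read could pull the single value it needs with one XPath and ignore the rest of the payload; the /response/rating path below is invented for the example:

import java.io.StringReader;

import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;

import org.xml.sax.InputSource;

public class RatingClientParser {

    // Extract only the one field we care about; extra or reordered elements in the
    // response are simply ignored, so most server-side changes don't break the client.
    public static String extractRating(String responseXml) throws Exception {
        XPath xpath = XPathFactory.newInstance().newXPath();
        return xpath.evaluate("/response/rating",
                new InputSource(new StringReader(responseXml)));
    }
}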
No, unfortunately you are not missing anything substantial. The microservice architecture comes with its own costs. The one that caught your eye (boilerplate code) is one well-known item on the list. Martin Fowler has a very good article explaining the various advantages and disadvantages of the idea. It covers topics like:
added complexity
increased operational maintenance cost
struggle to keep consistency (while allowing special cases to be treated in exceptional ways)
... and many more.
There are some frameworks out there to reduce such boilerplate code. I use Spring Boot in a current project (though not for microservices). If you already have Spring-based projects, it really simplifies the development of microservices (or any other non-microservice application based on Spring). Check out some of the examples: https://github.com/spring-projects/spring-boot/tree/master/spring-boot-samples
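For a sense of how little plumbing that leaves, a minimal Spring Boot controller wrapping the existing RatingEngine might look like the sketch below; the /rating path and the JSON shape are assumptions for the example:

import java.util.Collections;
import java.util.Map;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class RatingController {

    private final RatingEngine ratingEngine; // injected by Spring (add @Autowired on older versions)

    public RatingController(RatingEngine ratingEngine) {
        this.ratingEngine = ratingEngine;
    }

    @GetMapping("/rating")
    public Map<String, Integer> rating() {
        // Spring Boot serializes the Map to {"rating": ...} via Jackson
        return Collections.singletonMap("rating", ratingEngine.getRating());
    }
}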

Receiving and processing 3rd-party POSTs in a Tapestry app

The goal of my app is to create a leaderboard for a competition. To add to one's score, you just have to write something in HipChat (I already have a listener in HipChat that attempts to make a POST to my Tapestry app).
I am running into lots of trouble around accepting and handling a 3rd party POST to my Tapestry app. All the documentation I can find deals with internal requests.
Does anyone have any experience in setting up a way to receive a 3rd party post, handle it and make actions with the information? Any help would be great!
Tapestry's native POST handling is intended for HTML form submits and is not a good match for machine-initiated REST requests. Consequently, I'd handle it as a REST resource request, which is what JAX-RS is meant for. I assume you mean Tapestry 5, and if so, it's pretty easy to get started with Tynamo's tapestry-resteasy module (for disclosure, I'm one of the maintainers). If you are new to JAX-RS, you may want to read an overview of it (the link is for Jersey, the reference implementation, but the annotations work the same way regardless of implementation). In principle, you'd implement a (POJO + annotations) resource class and an operation with something like this:
@POST
@Produces({"application/json"})
public Response scorePoints(User user, long score)
{
    leaderboardService.add(user, score);
    return Response.ok().build();
}
On the client side, you'd just pass in the user ID and Tapestry's type coercion would handle the rest (assuming User is a known entity to Tapestry). Of course, you could just use primitive data types on both sides as well.

Rails application, but all the data layer uses a JSON/XML-based web service

I have a web service layer that is written in Java/Jersey, and it serves JSON.
For the front-end of the application, I want to use Rails.
How should I go about building my models?
Should I do something like this?
response = api_client.get_user(123)
user = User.new(response)
What is the best approach to mapping the JSON to the Ruby object?
What options do I have? Since this is a critical part, I want to know my options, because performance is a factor. This, along with mapping JSON to a Ruby object and going from Ruby object => JSON, is a common occurrence in the application.
Would I still be able to make use of validations? Or wouldn't it make sense since I would have validation duplicated on the front-end and the service layer?
Models in Rails do not have to do database operations; they are just normal classes. Normally they are imbued with ActiveRecord magic when you subclass ActiveRecord::Base.
You can use a gem such as Virtus that will give you models with attributes. And for validations you can go with Vanguard. If you want something close to ActiveRecord but without the database and are running Rails 3+ you can also include ActiveModel into your model to get attributes and validations as well as have them working in forms. See Yehuda Katz's post for details on that.
In your case it will depend on the data you will consume. If all the data sources have the same basic format, for example, you could create your own base class to keep all the logic that you want to share across the individual classes (inheritance).
If you have a few different types of data coming in, you could create modules to encapsulate behavior for the different types and include the modules you need in the appropriate classes (composition).
Generally, though, you probably want to end up with one class per resource in the remote API that maps 1-to-1 with whatever domain logic you have. You can do this in many different ways, but following the method naming used by ActiveRecord might be a good idea, both because you learn ActiveRecord while building your class structure and because it will help other Rails developers later if your API looks and works like ActiveRecord's.
Think about it in terms of what you want to be able to do to an object (this is where TDD comes in). You want to be able to fetch a collection (Model.all), fetch a specific element (Model.find(identifier)), push a changed element to the remote service (updated_model.save), and so on.
What the actual logic inside these methods will be depends on the remote service. But you will probably want each model class to hold a URL to its resource endpoint, and you will definitely want to keep the logic in your models. So instead of:
response = api_client.get_user(123)
user = User.new(response)
you will do
class User
  ...
  def self.find(id)
    @api_client.get_user(id) # an API client object set up elsewhere in the class
  end
  ...
end
User.find(123)
or more probably
class ApiClient
  ...
  def self.uri(resource_uri)
    @uri = resource_uri
  end

  def self.get(id)
    # basically whatever code you envisioned for api_client.get_user
  end
  ...
end

class User < ApiClient
  uri 'http://path.to.remote/resource.json'
  ...
  def self.find(id)
    get(id)
  end
  ...
end
User.find(123)
Basic principles: collect all the shared logic in a base class (ApiClient) and subclass it on a per-resource basis (User). Keep all the logic in your models; no other part of your system should have to know whether it's a DB-backed app or whether you are using an external REST API. Best of all is if you can keep the integration logic completely in the base class. That way you have only one place to update if the external data source changes.
As for going the other way, Rails has several good ways to convert objects to JSON, from the to_json method to using a gem such as RABL to have actual views for your JSON objects.
You can get validations by using part of the ActiveRecord modules. As of Rails 4 this is a module called ActiveModel, but you can do it in Rails 3 and there are several tutorials for it online, not least of all a RailsCast.
Performance will not be a problem beyond what you incur when calling a remote service; if the network is slow, you will be too. Some of that can probably be helped with caching (see another answer of mine for details), but that also depends on the data you are using.
Hope that puts you on the right track. If you want a more thorough grounding in how to design these kinds of structures, you should pick up a book on the subject, for example Practical Object-Oriented Design in Ruby: An Agile Primer by Sandi Metz.

Best way to implement the Java facade/service concept in a Rails application

I'm new to the Rails environment and come from a Java enterprise web application background. I want to create a few classes that allow you to easily interface with an external application that exposes RESTful web services. In Java I would simply create these as stateless beans/facades that return Data Transfer Objects, which are nice usable objects instead of ugly XML maps/data. What is the best way to do this in Rails/Ruby? Here are my main questions:
Should the facade classes be static or should you instantiate them before using the service?
Where should the DTOs be placed?
Thanks,
Pierre
UPDATE: We ended up using services as explained in this answer: Moving transactional operations away from the controller
Code that doesn't fit as a model or controller lives in the lib folder. The helpers folder is typically just for view-related code that generates HTML or other UI-related results.
I'd generally create them as regular classes that are instantiated and have instance methods to access the external REST service; this can make testing them easier. But this is really just a matter of preference (and also depends on how much state/reuse of those objects is required per request, which depends on what you're doing exactly).
The "DTOs" would just be plain Ruby classes in this case, maybe even simple Struct instances if they don't have any logic in them. If they are Ruby classes, they'd live in app/models, but they wouldn't extend ActiveRecord::Base (or anything else).
You probably want to take a look at httparty.
Here's an example of how you'd consume the Twitter API.
# lib/twitter.rb
require 'httparty'
require 'hashie'

class Twitter
  include HTTParty
  base_uri 'twitter.com'

  def timeline
    self.class.get("/statuses/public_timeline.xml")["statuses"].map do |status|
      Hashie::Mash.new(status)
    end
  end
end
# client code
client = Twitter.new
message = client.timeline.first
puts message.text
Notice that you don't have to create DTOs. httparty maps XML (take a look at http://dev.twitter.com/doc/get/statuses/public_timeline for the structure in this example) to hashes, and then Hashie::Mash maps them to methods, so you can just do message.text. It even works recursively, so you can do client.timeline.first.user.name.
If you're creating a Rails project, then I'd place twitter.rb in the lib folder.
If you'd rather use static methods you can do:
require 'httparty'
require 'hashie'

class Twitter
  include HTTParty
  base_uri 'twitter.com'

  def self.timeline
    get("/statuses/public_timeline.xml")["statuses"].map do |status|
      Hashie::Mash.new(status)
    end
  end
end
# client code
message = Twitter.timeline.first
puts message.text
