Consider this snippet of JavaScript in my JSP:
function clearCart() {
    cartForm.action = "cart_clear?method=clear";
    cartForm.submit();
}
Clearly it's trying to call a method on the back end to clear the cart. My question is: how does the server (Tomcat, most likely; correct me if I'm wrong) hosting the site that contains this snippet know how and where to find that method, and how does it "index" it by string values? In my Java file, the clear method is defined as:
public String clear()
{
    this.request = ServletActionContext.getRequest();
    this.session = this.request.getSession();
    logger.info("Cart is clearing...");
    Cart cart = (Cart) this.session.getAttribute(Constants.SESSION_CART);
    cart.clear();
    for (Long id : cart.getCartItems().keySet())
    {
        Item it = cart.getCartItems().get(id);
        System.out.println(it.getProduct().getName() + " " + it.getNumber());
    }
    return "cart";
}
By which module/mechanism does Tomcat know how to locate precisely that method? By copying online tutorials and textbooks I know how to write this code, but I want to get a bit closer to the bottom of it all, or at least understand something very basic about it.
Here's my educated (or not so educated) guess: since I'm basing my entire project on Struts, Hibernate and Spring, I've inadvertently configured the build path and dependencies in such a way that when I hit the "compile" button, all the "associating" and "navigating" are done by these frameworks. In other words, as long as I configure the project correctly and get Spring etc. "involved" (sorry, I can't think of the technical jargon that's on the tip of my tongue), and as long as I inherit a class or implement an interface, the compiler will expose these Java methods to the JSP script when compiling - it's partly the work of the compiler, partly the work of the people who wrote the Spring framework. Or, using a really bad analogy: consider a C++ project where you use a third-party library that came in compiled binary form; all you have to do is include the right header (.h/.hpp file) and call the right function, and you'll reach the function at run time - note that this really is a bad analogy.
Is that how it is done, or am I overthinking it - is it, for example, all handled by Tomcat?
Sorry for all the verbosity; things get lengthy when you need to express slightly more complicated and nuanced ideas. Also - please go deep and low-level, but not too deep: by that I mean you are free to lecture on how Hibernate and Spring etc. work and how their code runs on a server, but try not to touch the Java virtual machine, bytecode, C++ pointers and so on, unless of course it is helpful.
Thanks in advance!
Tomcat doesn't do much except obey the Servlet specification. Spring tells Tomcat that all requests to http://myserver.com/ should be directed to Spring's DispatcherServlet, which is the main entry point.
Then it's up to Spring to further direct those requests to the code that handles them. There are different strategies for mapping a specific URL to the code that handles the request, but it's not set in stone and you could easily create your own strategy that would allow you to use whatever kind of URLs you want. For a simple (and stupid) example you could have http://myserver.com/1 that would execute the first method in a single massive handler class, http://myserver.com/2 would execute the second, etc.
The example is with Spring, but it's the same general idea with other frameworks: you have a mapper that maps a URL to the handler code.
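For a concrete flavor of what such a mapping can look like, here is a rough sketch using Spring MVC annotations (not your exact Struts setup; the class name and URL are made up). At startup the framework scans for these annotations and builds a table from URL patterns to handler methods, and the dispatcher servlet consults that table on every request:

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
public class CartController {

    // The dispatcher finds "/cart_clear" in its mapping table and invokes this method.
    @RequestMapping("/cart_clear")
    public String clear() {
        // ... clear the cart ...
        return "cart"; // a logical view name the framework resolves to a JSP
    }
}

Struts does the equivalent with its action mappings instead of annotations, but the principle is the same: a mapping built from configuration (or annotations) at startup, looked up per request.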
These days it's all hidden under layers of abstraction so you don't have to care about the specifics of the mapping and can develop quickly and concentrate on the business code.
I'm working on a servlet to perform some logic specific to a resourceType in Sling, set information on the request so it is accessible in the JSP, and then hand the request off to the JSP, similarly to the first solution provided in this answer.
Here's some example code to represent my situation:
@SlingServlet(
    resourceTypes = "myapp/components/mycomponent",
    methods = "GET",
    extensions = {"html"}
)
...
@Reference
private ServletResolver servletResolver;

protected void doGet(....) {
    setPropertiesToRequest();
    // resolve the rendering JSP and hand the request off to it
    Servlet servlet = servletResolver.resolveServlet(resource, "....jsp");
    servlet.service(slingRequest, slingResponse);
    clearPropertiesFromRequest();
}
Because of this, I've noticed that I've lost Sling's selector handling (I've had to roll my own simpler version to determine which JSP to render; full-featured Sling selector handling is described in more detail here). I wanted to reach out to the Stack Overflow community and ask what else I may be missing out on by depriving the default GET handler of the request. I've scanned through the source code, but I think there may be more going on.
Secondly, I'd be interested in thoughts on how and where this approach may impact performance of the request resolution.
Thanks, Thomas
Processing the business logic in Java and delegating to scripts for rendering sounds like a job for the recently released Sling Models. Using that should remove the need to implement your own handling of selectors, as those won't affect the model selection, only the rendering scripts.
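As a rough illustration (assuming the Sling Models bundles are deployed; the class and property names here are made up), a model declares what it needs from the resource, and the selectors remain purely a rendering concern:

import javax.inject.Inject;
import org.apache.sling.api.resource.Resource;
import org.apache.sling.models.annotations.Model;

@Model(adaptables = Resource.class)
public class MyComponentModel {

    // injected from the resource's "title" property
    @Inject
    private String title;

    public String getTitle() {
        return title;
    }
}

Whichever JSP the selectors end up picking can then obtain the model with resource.adaptTo(MyComponentModel.class).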
Not sure what you are trying to achieve here, but the main problem seems to me to be that your SlingServlet handles the html extension and does not itself declare any selectors to filter further. Thus it of course intercepts all the requests to your component, and then you have to take care of the selectors again to be able to choose the correct JSP.
The question is, why do you use a SlingServlet for it when you anyway do the rendering by JSP?
Can't you implement your logic in the JSP or better in a bean referenced in the JSP?
In our company we use our own custom tag that takes care of this, but there are public frameworks available from other Adobe partners:
https://github.com/Cognifide/Slice
http://neba.io/index.html
I've used both Play 1.x and Play 2.x, but I couldn't find where in its source code Play dispatches requests to the different actions.
e.g.
http://HOST:9000/Application/index
Play could find the controller Application, and then invoke its index method.
I thought Play works this way:
Take the first part of the URI, Application, and instantiate Application using reflection.
Take the second part of the URI, index, and invoke index() on Application using reflection.
But I don't know where exactly that code is.
And if it uses a lot of reflection, how can it handle millions of requests? I think reflection is a lot slower than a direct method call (or does Play do some magic optimization?).
The routes file gets compiled into the target/scala-2.10/src_managed/main/routes_routing.scala file.
Even if reflection were involved, why should it be slow? The file would only need to be reflected on once, at app startup.
The routes file points each URI to a specific method.
For example:
GET /clients/:id controllers.Clients.show(id: Long)
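For context, the controller method that routes line points at could look roughly like this (Play 2.x Java API; the body is made up). Because the routes file is compiled, the generated router calls the method directly rather than looking it up reflectively on every request:

package controllers;

import play.mvc.Controller;
import play.mvc.Result;

public class Clients extends Controller {

    // invoked by the compiled router for GET /clients/:id
    public static Result show(Long id) {
        return ok("Client " + id);
    }
}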
If you don't care about the routes type-safety incorporated in Play 2.x, you can easily write a custom resolver which will catch all unhandled routes with dynamic parts spanning several /, so that using simple string operations + reflection you can access any controller/action combination you want...
Anyway, consider whether your app's security is worth this sacrifice.
PS: I believe samples aren't required for this approach; if they are, let me know and I'll write something up in my free time.
I have a web service layer that is written in Java/Jersey, and it serves JSON.
For the front-end of the application, I want to use Rails.
How should I go about building my models?
Should I do something like this?
response = api_client.get_user(123)
user = User.new(response)
What is the best approach to mapping the JSON to the Ruby object?
What options do I have? Since this is a critical part, I want to know my options, because performance is a factor. This, along with mapping JSON to a Ruby object and going from Ruby object => JSON, is a common occurrence in the application.
Would I still be able to make use of validations? Or wouldn't it make sense since I would have validation duplicated on the front-end and the service layer?
Models in Rails do not have to do database operations; they are just normal classes. Normally they are imbued with ActiveRecord magic when you subclass ActiveRecord::Base.
You can use a gem such as Virtus that will give you models with attributes. And for validations you can go with Vanguard. If you want something close to ActiveRecord but without the database and are running Rails 3+ you can also include ActiveModel into your model to get attributes and validations as well as have them working in forms. See Yehuda Katz's post for details on that.
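As a minimal sketch of the ActiveModel route (this assumes Rails 4's ActiveModel::Model; on Rails 3 you would include ActiveModel::Validations and friends individually, and the attribute names here are made up):

class User
  include ActiveModel::Model

  attr_accessor :name, :email
  validates :name, presence: true
end

user = User.new(name: nil, email: "someone@example.com")
user.valid?          # => false
user.errors[:name]   # => ["can't be blank"]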
In your case it will depend on the data you will consume. If all the datasources have the same basic format for example you could create your own base class to keep all the logic that you want to share across the individual classes (inheritance).
If you have a few different types of data coming in you could create modules to encapsulate behavior for the different types and include the models you need in the appropriate classes (composition).
Generally though you probably want to end up with one class per resource in the remote API that maps 1-to-1 with whatever domain logic you have. You can do this in many different ways, but following the method naming used by ActiveRecord might be a good idea, both since you learn ActiveRecord while building your class structure and it will help other Rails developers later if your API looks and works like ActiveRecords.
Think about it in terms of what you want to be able to do to an object (this is where TDD comes in). You want to be able to fetch a collection with Model.all, fetch a specific element with Model.find(identifier), push a changed element to the remote service with updated_model.save, and so on.
What the actual logic inside these methods has to be will depend on the remote service. But you will probably want each model class to hold a URL to its resource endpoint, and you will definitely want to keep the logic in your models. So instead of:
response = api_client.get_user(123)
user = User.new(response)
you will do
class User
  ...
  def self.find id
    @api_client.get_user(id)  # the API client becomes an implementation detail of the model
  end
  ...
end

User.find(123)
or more probably
class ApiClient
  ...
  # `uri` is meant to be called from subclasses only
  def self.uri resource_uri
    @uri = resource_uri
  end

  def self.get id
    # basically whatever code you envisioned for api_client.get_user
  end
  ...
end

class User < ApiClient
  uri 'http://path.to.remote/resource.json'
  ...
  def self.find id
    get(id)
  end
  ...
end

User.find(123)
Basic principles: collect all the shared logic in a base class (ApiClient) and subclass it on a per-resource basis (User). Keep all the logic in your models; no other part of your system should have to know whether it's a DB-backed app or whether you are using an external REST API. Best of all is if you can keep the integration logic completely in the base class. That way you have only one place to update if the external datasource changes.
As for going the other way, Rails has several good ways to convert objects to JSON, from the to_json method to using a gem such as RABL to have actual views for your JSON objects.
You can get validations by using part of the ActiveRecord modules. As of Rails 4 this is a module called ActiveModel, but you can do it in Rails 3 and there are several tutorials for it online, not least of all a RailsCast.
Performance will not be a problem except for what you incur when calling a remote service; if the network is slow, you will be too. Some of that could probably be helped with caching (see another answer by me for details), but that is also dependent on the data you are using.
Hope that puts you on the right track. And if you want a more thorough grounding in how to design these kinds of structures, you should pick up a book on the subject, for example Practical Object-Oriented Design in Ruby: An Agile Primer by Sandi Metz.
Our team has been given a legacy system for further maintenance and development.
As this is true "legacy" stuff, there is a really, really small number of tests, and most of them are crap. This is an app with a web interface, so there are both container-managed components and plain Java classes (not tied to any framework etc.) which are "new-ed" here and there, wherever convenient.
As we work with this system, every time we touch a given part we try to break the code into smaller pieces, discover and refactor dependencies, and push dependencies in instead of pulling them from within the code.
My question is how to work with such a system to break dependencies, make the code more testable, etc. When to stop, and how to deal with this?
Let me show you an example:
public class BillingSettingsAction {

    private TelSystemConfigurator configurator;
    private OperatorIdDao dao;

    public BillingSettingsAction(String zoneId) {
        configurator = TelSystemConfiguratorFactory.instance().getConfigurator(zoneId);
        dao = IdDaoFactory.getDao();
        ...
    }

    // methods using configurator and dao
}
This constructor definitely does too much. Also, testing it before further refactoring requires doing magic with PowerMock etc. What I'd do is change it into:
public BillingSettingsAction(String zone, TelSystemConfigurator configurator, OperatorIdDao dao) {
    this.configurator = configurator;
    this.dao = dao;
    this.zone = zone;
}
or provide a constructor setting only the zone, with setters for the dependencies.
The problem I see is that if I take the dependencies in the constructor, I still need to provide them somewhere, so it is just moving the problem one level up. I know I can create a factory for wiring all the dependencies, but touching different parts of the app will mean creating a different factory for each. I obviously can't refactor the whole app at once and introduce e.g. Spring there.
Exposing setters (maybe with default implementations provided) is similar; moreover, it feels like adding code for tests only.
So my question is: how do you deal with that? How do you make the dependencies between objects better, more readable and testable without doing it all in one go?
I just recently started reading "Working Effectively with Legacy Code" by Michael Feathers.
The book is basically an answer to your very question. It presents very actionable "nuggets" and techniques to incrementally bring a legacy system under test and progressively improve the code base.
The navigation can be a little confusing, as the book references itself by pointing to specific techniques, almost from page 1, but I find that the content is very useful so far.
I am not affiliated with the author or anything like that, it's just that I am facing similar situations and found this a very interesting resource.
HTH
I'd try to establish a rule like the Boy Scout rule: whenever you touch a file you have to improve it a little, apart from implementing whatever you wanted to implement.
In order to support that you can:
- Agree on a fixed time budget for such improvements, e.g. for 2 hours of feature work we allow 1 hour of clean-up.
- Have metrics visible showing the improvement over time. Often simple things like average file size and test coverage are sufficient.
- Have a list of things you want to change, at least for the bigger stuff, like "Get rid of TelSystemConfiguratorFactory"; track which tasks you are already working on and prefer finishing things that are already started over starting new ones.
In any case make sure management agrees to your approach.
On the more technical side: the approach you showed is good. In many cases I would also keep a second constructor that does not take the dependencies but looks them up itself and delegates to the new constructor with those parameters. Mark that additional constructor deprecated, and when you touch clients of the class, make them use the new constructor.
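As a sketch on the class from your question (reusing the factory calls from your original constructor), that could look like this:

public class BillingSettingsAction {

    private final String zone;
    private final TelSystemConfigurator configurator;
    private final OperatorIdDao dao;

    // New constructor: dependencies are pushed in, so tests can pass fakes.
    public BillingSettingsAction(String zone, TelSystemConfigurator configurator, OperatorIdDao dao) {
        this.zone = zone;
        this.configurator = configurator;
        this.dao = dao;
    }

    // Old signature kept for existing callers; it looks up the dependencies
    // exactly as before and delegates to the new constructor.
    @Deprecated
    public BillingSettingsAction(String zoneId) {
        this(zoneId,
             TelSystemConfiguratorFactory.instance().getConfigurator(zoneId),
             IdDaoFactory.getDao());
    }

    // methods using configurator and dao
}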
If you are going with Spring (or some other DI framework), you can start by replacing calls to static factories with lookups from the Spring context as an intermediate step, before the object is actually created by Spring with all its dependencies injected.
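For illustration, such an intermediate step might look roughly like this (ApplicationContextHolder is a made-up helper for reaching the Spring context from non-Spring code; the configurator would be declared as a bean in your Spring configuration):

import org.springframework.context.ApplicationContext;

// inside the deprecated constructor, instead of the static factory call:
ApplicationContext ctx = ApplicationContextHolder.getContext(); // hypothetical static accessor
TelSystemConfigurator configurator = ctx.getBean(TelSystemConfigurator.class);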
I'm working on some user-related tasks for a website. For cases when a person is registering or editing a user, they fill out a form and the request is handled in a servlet. At the moment, the servlet takes all the request parameters and builds a User object from them, like this:
User toRegister = new User(request.getParameter("name"),
request.getParameter("lastName"));
There's more parameters but you get the point.
So this sort of code is being reused in a bunch of different servlets (registering, admin adding user, user updating self, admin updating others etc) and it's kinda ugly, so I wanted to clean it up. The two alternatives I could think of were a constructor that takes the request object or a static method in the User class to create and return a new User based on the request.
It's not much of a question, since I know they would both work, but I couldn't find any sort of best practice for this situation. Should I keep handling the requests individually in the servlets in case the forms change, or should I implement one of the above methods?
DON'T add a c'tor that takes a Request as an argument. You only couple your User class to the Servlet API this way.
Instead use a web framework, as @skaffman suggests. There are many of these, and it will make your life easier.
EDIT: If you refuse to learn a new framework, you can at least use BeanUtils or some similar library to do just the data binding. I do recommend the web framework option, though.
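A rough sketch of what that data binding could look like with Commons BeanUtils (this assumes the request parameter names match the User bean's property names, and that User has a no-arg constructor and setters; the checked exceptions still need handling):

import org.apache.commons.beanutils.BeanUtils;

User toRegister = new User();
// copies each request parameter onto the matching bean property,
// converting types where needed
BeanUtils.populate(toRegister, request.getParameterMap());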
Instead of coding all the business logic in the servlet, why don't you use a basic MVC framework? Using a framework will make your coding and testing a lot easier.