I have a Liferay instance running on a URL like example.org/app. This instance has a REST API that would normally be available under example.org/app/o/restpath.
The way the server running this instance is set up, the frontend is accessible without restrictions from the outside, while the REST API is only accessible from inside the network under a URL like example.org/rest.
I need to make sure that it is impossible to access the REST API via example.org/app. It should also be impossible to access the frontend via example.org/rest. Does anybody have any suggestions?
There are tons of ways of doing that; the best one will depend on your stack, preferences and abilities.
A reverse proxy is the first thing that comes to mind, bearing in mind that it is normally better if your app itself has control over who can access it. So a wrapper or a filter checking who is accessing would help. But even then, should the filter go on the main application or on your module? That is an evaluation that needs to come from you.
You can also combine the proxy strategy with a filter, just in case one day you are tuning your proxy and let something through. You might also decide to change your proxy server at some point.
Or your company may already have a proxy that handles outgoing traffic, and it would be easier if that proxy were the one granted access...
Your servlet container might also be able to provide such control, so you would not actually need a proxy.
Although I would feel more comfortable if that kind of feature lived in the application layer itself: a wrapper around your component that provides the service, a filter, or even a check in the entry-point method, while the other measures are just extra layers and load reduction.
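To illustrate the filter option, here is a minimal sketch of such a servlet filter, assuming the REST API lives under /o/ and that internal callers can be recognised by their network address or by a header your proxy sets; those details are assumptions, not taken from your setup:

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Sketch: reject REST calls that do not come from the internal network.
    // The /o/ prefix and the 10.* address range are assumptions; adapt them to your environment.
    public class RestAccessFilter implements Filter {

        public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest req = (HttpServletRequest) request;
            HttpServletResponse resp = (HttpServletResponse) response;

            boolean isRestCall = req.getRequestURI().startsWith("/o/");
            boolean isInternal = req.getRemoteAddr().startsWith("10.");
            // Alternatively, check a header that only your internal proxy adds.

            if (isRestCall && !isInternal) {
                resp.sendError(HttpServletResponse.SC_FORBIDDEN);
                return; // block REST access coming through the public route
            }
            chain.doFilter(request, response);
        }

        public void init(FilterConfig filterConfig) {}
        public void destroy() {}
    }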
Some companies have network devices that operate several layers up the network stack; those have lots of potential to help here too, and an IDS would be able to provide alarms, triggers and such...
As it stands, one would need more information to help you further, even about what you mean by "ensure" (how far the assurance needs to go: are you thinking about passwords, certificates, an IDS, or a simple approach like the ones mentioned), but I guess that covers it.
We generate our Swagger/OpenAPI documents based on annotations in our Java code. There are endpoints in the code that are there just for our internal usage. These are endpoints that we don't want accessible, or even visible, publicly.
It's possible, I guess, that we could post-process the Swagger file and remove these endpoints, but that adds another step to the build process. What would really be nice is if we could just tag them in such a way that when the Google Cloud Endpoints load balancer saw the tag it would ignore the endpoint. Is such a thing possible?
I imagine we could do something like this by identifying them as requiring access, and then not configuring anyone to have access, but I wondered if there was another path that might yield the same or better results.
Right now you can only manage who can access your endpoint with:
API keys
Firebase authentication
Auth0
Service accounts
but this is only for authentication; what you want to accomplish is not possible at the moment. I would suggest you create a feature request, so the GCP team knows you are interested in the functionality and may implement it in the future.
I manage a few Spring-based web applications. For example, my client is a Flex application with many modules/screens. Access to a screen or page, or even a Spring service, is controlled by Spring Security based on the user's role.
At certain times we may want to block access to a screen or service completely, irrespective of the access granted by role. Maybe we want to take down a specific page/screen or a service for maintenance and enable it again after a certain time. What is the best practice to achieve this? I do not want to restart the application.
I am thinking of using a filter, so every request will pass through the filter, and the filter will have the logic to check whether the current operation or view is allowed or disabled.
Is this the best way to handle it, or is there another solution?
What is the best practice?
A servlet filter is a good choice if you want to block pages known by URL. This solution is simple and pretty straightforward.
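As a rough sketch of the filter approach, something like this could block a configurable set of URLs while maintenance is on; MaintenanceConfig is a made-up name for wherever you keep the runtime toggle (JMX bean, database flag, etc.):

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Sketch: reject requests to URLs that are currently switched off.
    // MaintenanceConfig is a hypothetical holder for the runtime toggle.
    public class MaintenanceFilter implements Filter {

        public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest req = (HttpServletRequest) request;
            HttpServletResponse resp = (HttpServletResponse) response;

            if (MaintenanceConfig.isBlocked(req.getRequestURI())) {
                resp.sendError(HttpServletResponse.SC_SERVICE_UNAVAILABLE, "Down for maintenance");
                return;
            }
            chain.doFilter(request, response);
        }

        public void init(FilterConfig filterConfig) {}
        public void destroy() {}
    }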
A Spring aspect will be better if you want to block services. Just wrap the classes you would like to block and perform a check prior to calling them. Throw a specific exception that you can handle in the presentation layer.
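A sketch of the aspect variant, assuming Spring AOP; the pointcut package, MaintenanceConfig and ServiceUnavailableException are illustrative names:

    import org.aspectj.lang.annotation.Aspect;
    import org.aspectj.lang.annotation.Before;
    import org.springframework.stereotype.Component;

    // Sketch: reject calls to service beans while the service layer is switched off.
    @Aspect
    @Component
    public class MaintenanceAspect {

        @Before("execution(* com.example.app.service..*(..))") // adjust to your service package
        public void checkMaintenance() {
            if (MaintenanceConfig.isServiceLayerBlocked()) {
                // A custom runtime exception, handled in the presentation layer.
                throw new ServiceUnavailableException("Service temporarily disabled for maintenance");
            }
        }
    }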
We implemented a similar feature once in a REST-based application. A global filter/aspect blocks all non-GET methods, effectively switching the application to read-only mode.
You can always front your application with an Apache httpd (or some other reverse-proxy web front end) and control access to individual URL patterns there. That also gives you the added benefit that you can have a nice maintenance page up while you take down the entire application.
I am working on a Java application on Google App Engine where we have a concept of RPC, but when should we ideally make use of RPC? The same functionality that I am implementing with RPC could be implemented without it. So in what scenario should RPC ideally be used?
You typically use RPC (or similar) when you need to access a service that is external to the current application.
If you are simply trying to call a method that is part of this application (i.e. in this application's address space) then it is unnecessary ... and wasteful to use RPC.
... when should we ideally make use of RPC?
When you need it, and not when you don't need it.
The same functionality that I am implementing with RPC could be implemented without it.
If the same functionality can be implemented without RPC, then it sounds like you don't need it.
So in what scenario should RPC ideally be used?
When it is needed (see above).
A more instructive answer would list scenarios where there are good reasons to implement different functions of a "system" in different programs running in different address spaces and (typically) on different machines. Good reasons might include such things as:
insulating one part of a system from another
implementing different parts of a system in different languages
interfacing with legacy systems
interfacing with subsystems provided by third party suppliers; e.g. databases
making use of computing resources of other machines
providing redundancy
and so on.
It sounds like you don't need RPC in your application.
RPC is used whenever you need to access data from resources on the server that are not available on the client: for example, web services, databases, etc.
RPC is designed for request/response, i.e. you have a self-contained request to a service and you expect a response (a return value or a success/failure status).
You can use it anywhere you might use a method call, except that the object you are calling is not local to the current process.
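To make that concrete, here is a small sketch in which the same interface is backed either by a local object or by an implementation that calls another process over HTTP; the interface, the URL and the use of the Java 11 HttpClient are all just illustrative choices:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // The caller programs against this interface either way.
    interface QuoteService {
        String quoteOfTheDay();
    }

    // Local implementation: an ordinary method call in the same address space.
    class LocalQuoteService implements QuoteService {
        public String quoteOfTheDay() {
            return "Premature optimization is the root of all evil.";
        }
    }

    // RPC-style implementation: the work happens in another process, reached over HTTP.
    // The URL is a made-up example.
    class RemoteQuoteService implements QuoteService {
        private final HttpClient client = HttpClient.newHttpClient();

        public String quoteOfTheDay() {
            try {
                HttpRequest request = HttpRequest.newBuilder(
                        URI.create("https://quotes.example.org/api/quote-of-the-day")).build();
                return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
            } catch (Exception e) {
                throw new RuntimeException("Remote call failed", e);
            }
        }
    }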
Let me explain the complete situation I am currently stuck in.
We are developing a very complex application in GWT and Hibernate, and we are trying to host the client and server code on different servers because of a client requirement. I am currently able to achieve this using JNDI.
Here comes the tricky part: the client needs that application on other platforms as well, with the same database and the same methods, let's say an iPhone or .NET version of our application. We don't want to write the server code again because it's going to be the same for all of them.
I have tried a web services wrapper on top of my server code, but because of the complexity of the architecture and the class dependencies I am not able to do so. For example, let's consider the code below.
class Document {
    List<User> users;
    List<AccessLevels> accessLevels;
    // ... plus many more lists of other classes
}
The Document class has a list of users, a list of access levels, and many more lists of other classes, and those other classes have more lists of their own. Some important server methods take a class (Document or another) as input and return some other class as output. And we shouldn't use such a complex structure in web services.
So, I need to stick with JNDI. Now, I don't know how I can make a JNDI call from any other application.
Please suggest ways to overcome this situation. I am open to technology changes, meaning JNDI, web services, or any other technology that serves me well.
I have never seen JNDI used as a mechanism for request/response inter-process communication. I don't believe that this will be a productive line of attack.
You believe that Web Services are inappropriate when the payloads are complex. I disagree; I have seen many successful projects using quite large payloads with many nested classes. Trivial example: Customers with Orders with Order Lines with Products with ... and so on.
It is clearly desirable to keep payload sizes small; there are serialization and network costs, and big objects will be more expensive. But it is far preferable to have one big request than lots of little ones. A "busy" interface will not perform well across a network.
I suspect that the one problem you may have is that certain of the server-side classes are not pure data: they refer to classes that only make sense on the server, and you don't want those classes in your client.
In this case you need to build an "adapter" layer. This is dull work, but no matter what inter-process communication technique you use, you will need to do it. You need what I refer to as Data Transfer Objects (DTOs): these represent payloads that are understood by the client, using only classes reasonable for the client, and which the server can consume and create.
Let's suppose that you use technology XXX (JNDI, web service, direct socket call, JMS):
Client --- sends Document DTO ---XXX---> Adapter transforms DTO into the server's Document
and similarly in reverse. My claim is that no matter which XXX is chosen, you have the same problem: you need the client to work with "cut-down" objects that reveal none of the server's implementation details.
The adapter has responsibility for creating and understanding DTOs.
I find that working with RESTful web services using JAX-RS is very easy; once you have a set of DTOs, it's the work of minutes to create the web services.
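For instance, a cut-down DTO and a JAX-RS resource might look roughly like this; DocumentDto, the /documents path and the DocumentAdapter helper are illustrative names, with the adapter doing the DTO-to-entity mapping described above:

    import java.util.List;
    import javax.ws.rs.Consumes;
    import javax.ws.rs.POST;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    // A cut-down payload class: only data the client needs, no server-only types.
    class DocumentDto {
        public String title;
        public List<String> userIds;
        public List<String> accessLevels;
    }

    // Sketch of a JAX-RS resource. DocumentAdapter is a hypothetical class that
    // converts between the DTO and the server's Document entity.
    @Path("/documents")
    public class DocumentResource {

        @POST
        @Consumes(MediaType.APPLICATION_JSON)
        @Produces(MediaType.APPLICATION_JSON)
        public DocumentDto save(DocumentDto dto) {
            Document entity = DocumentAdapter.toEntity(dto); // DTO -> server-side Document
            // ... hand 'entity' to the existing server code ...
            return DocumentAdapter.toDto(entity);            // server-side Document -> DTO
        }
    }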
I have a web application running on Google App Engine (GAE) for Java. I'm authenticating the client at the servlet layer but would like to make the client information available to my business and data layers without having to pass the client object through the arguments of every single function.
I'm considering setting up a "session" type object using ThreadLocal. That way any function can just say something like:
CurrentUser.getRoles();
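Roughly, the holder I have in mind would look like the sketch below (names are just illustrative), with a servlet filter calling set(...) after authentication and clear() when the request ends:

    import java.util.List;

    // Sketch of the ThreadLocal-backed holder; UserInfo is whatever type holds the client data.
    public final class CurrentUser {

        private static final ThreadLocal<UserInfo> HOLDER = new ThreadLocal<>();

        public static void set(UserInfo user) { HOLDER.set(user); }

        public static List<String> getRoles() { return HOLDER.get().getRoles(); }

        // Must be called when the request ends, otherwise the value leaks to the
        // next request served by the same thread.
        public static void clear() { HOLDER.remove(); }
    }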
Is this a good way to do this or is there something else that is a more accepted solution?
Thanks!
This will probably work and will be utterly convenient, but usually I try to avoid ThreadLocals for similar use cases as much as I can. Reasons:
Your code suddenly starts to depend on the fact that the underlying container uses different threads for different users. If the container starts using NIO, a different kind of threads (e.g. green threads that are not mapped to java.lang.Thread on some exotic JVM), etc., you will be out of luck.
ThreadLocals tend not to get cleaned up after use. So if your server spikes in usage and one of the users puts lots of stuff into the 'cache', you might run out of RAM.
As a consequence of not cleaning up after a request, a ThreadLocal can expose a security vulnerability if another user's request ends up on the same thread (see the sketch after this list).
Finally, I believe ThreadLocals were designed for environments where you have absolute control over the threads in your context, and this use case is far beyond that.
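If you do go the ThreadLocal route despite these caveats, clearing the value in a filter's finally block mitigates the cache and security points above; a minimal sketch, assuming the CurrentUser holder and a hypothetical authenticate(...) step from the question:

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;

    // Sketch: bind the user to the thread for the duration of one request only.
    public class CurrentUserFilter implements Filter {

        public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                throws IOException, ServletException {
            try {
                // authenticate(...) is hypothetical: however you resolve the client at the servlet layer.
                CurrentUser.set(authenticate((HttpServletRequest) request));
                chain.doFilter(request, response);
            } finally {
                CurrentUser.clear(); // always runs, even if the request throws
            }
        }

        private UserInfo authenticate(HttpServletRequest req) {
            return new UserInfo(); // placeholder for your actual authentication
        }

        public void init(FilterConfig filterConfig) {}
        public void destroy() {}
    }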
Unfortunately I don't know much about GAE to suggest a viable alternative, sorry about that!
ThreadLocals are a completely accepted way to store such information. Besides us, I also know that Alfresco does it.
If using Spring and Spring Security works for you, then you can use the code I've built as part of jappstart for your authentication/authorization. This information is then available via Spring Security.
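For example, once Spring Security has authenticated the request, the current user's roles can be read from the security context anywhere in the code; a minimal sketch:

    import java.util.Collection;
    import java.util.Collections;
    import org.springframework.security.core.Authentication;
    import org.springframework.security.core.GrantedAuthority;
    import org.springframework.security.core.context.SecurityContextHolder;

    // Sketch: Spring Security keeps the authenticated principal in a (by default
    // thread-bound) SecurityContext, so business code can read it directly.
    public class CurrentUserRoles {

        public static Collection<? extends GrantedAuthority> roles() {
            Authentication auth = SecurityContextHolder.getContext().getAuthentication();
            if (auth == null) {
                return Collections.emptyList();
            }
            return auth.getAuthorities();
        }
    }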