Is there any way to get Tomcat upload and download traffic using Java and JMX?
Tomcat version = ?
If you are asking about the count of bytes transferred, then yes. The detailed status page in the Manager web application shows that information, and it obtains it via JMX.
You can look at the StatusManagerServlet and StatusTransformer classes in the org.apache.catalina.manager package for the actual source code.
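For illustration, a minimal sketch of reading the per-connector byte counters over JMX from inside the Tomcat JVM (it assumes the default "Catalina" MBean domain; for a remote process you would go through a JMX connector instead):

    import java.lang.management.ManagementFactory;
    import java.util.Set;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class TomcatTrafficProbe {
        public static void main(String[] args) throws Exception {
            // Works when run in the same JVM as Tomcat (e.g. from a webapp).
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();

            // One GlobalRequestProcessor MBean exists per connector; the name
            // attribute (e.g. "http-nio-8080") depends on your configuration.
            Set<ObjectName> names = server.queryNames(
                    new ObjectName("Catalina:type=GlobalRequestProcessor,name=*"), null);

            for (ObjectName name : names) {
                long received = (Long) server.getAttribute(name, "bytesReceived");
                long sent = (Long) server.getAttribute(name, "bytesSent");
                System.out.println(name + " received=" + received + " sent=" + sent);
            }
        }
    }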
If you are asking about the transfer rate, then as far as I remember there is no such information. It can also be defined in different ways, since it differs across clients.
You can write your own Filter or Valve (or extend AccessLogValve) to perform such calculations and expose the results via JMX.
You can also analyze an access log file.
Background Context:
Due to enterprise limitations, an uncooperative 3rd party vendor, and a lack of internal tools, this approach has been deemed most desirable. I am fully aware that there are easier ways to do this, but that decision is a couple of pay grades away from my hands, and I'm not about to fund new development efforts out of my own pocket.
Problem:
We need to send an internal file to an external vendor. The team responsible for these types of files only transfers with SFTP, while our vendor only accepts files via REST API calls. The idea we came up with (considering the above constraints) was to use our OpenShift environment to host a "middle-man" SFTP server (running from a jar file) that will hit the vendor's API after our team sends it the file.
I have learned that if we want to get SFTP to work with OpenShift, we need to set up our cluster and pods with an ingress/external IP. This looks promising, but due to enterprise bureaucracy I'm waiting for the OpenShift admins to make the required changes before I can see if this works, and I'm running out of time.
Questions:
Is this approach even possible with the technologies involved? Am I on the right track?
Are there other configuration options I should be using instead of what I explained above?
Are there any clever ways in which an SFTP client can send a file via HTTP request? So instead of running an embedded SFTP server, we could just set up a web service instead (this is what our infrastructure supports and prefers).
References:
https://docs.openshift.com/container-platform/4.5/networking/configuring_ingress_cluster_traffic/configuring-externalip.html
https://docs.openshift.com/container-platform/4.5/networking/configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-service-external-ip.html#configuring-ingress-cluster-traffic-service-external-ip
That's totally possible; I have done it in the past as well, with OpenShift 3.10. The approach of using externalIPs is the right way.
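For what it's worth, a minimal sketch of such a "middle-man" SFTP server embedded in a jar, assuming Apache Mina SSHD (the port, credentials, directory, and forwarding step are placeholders, and the SftpSubsystemFactory package moved between sshd versions):

    import java.nio.file.Paths;
    import java.util.Collections;
    import org.apache.sshd.common.file.virtualfs.VirtualFileSystemFactory;
    import org.apache.sshd.server.SshServer;
    import org.apache.sshd.server.keyprovider.SimpleGeneratorHostKeyProvider;
    // NOTE: newer sshd releases moved this class to org.apache.sshd.sftp.server
    import org.apache.sshd.server.subsystem.sftp.SftpSubsystemFactory;

    public class MiddleManSftpServer {
        public static void main(String[] args) throws Exception {
            SshServer sshd = SshServer.setUpDefaultServer();
            sshd.setPort(2222); // the port exposed via the externalIP service

            // Auto-generated host key, persisted so the fingerprint stays stable across restarts
            sshd.setKeyPairProvider(new SimpleGeneratorHostKeyProvider(Paths.get("hostkey.ser")));

            // Replace with real credential checks (hypothetical user/password here)
            sshd.setPasswordAuthenticator((user, pass, session) ->
                    "transfer-user".equals(user) && "change-me".equals(pass));

            // Serve a local directory as the SFTP root
            sshd.setFileSystemFactory(new VirtualFileSystemFactory(Paths.get("/data/inbound")));
            sshd.setSubsystemFactories(Collections.singletonList(new SftpSubsystemFactory()));

            sshd.start();
            // From here, watch /data/inbound (e.g. with a WatchService) and POST
            // newly arrived files to the vendor's REST API.
            Thread.currentThread().join();
        }
    }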
Please suggest the different ways of achieving load balancing on the database when more than one Tomcat instance is accessing the same database.
Thanks.
Here is a detailed example of using multiple Tomcat instances with an Apache-based load balancer.
Note: if you have hardware that can do the load balancing, that is, in my view, even preferable (place it where Apache would go).
In short it works like this:
A request comes from a client to the Apache web server / hardware load balancer.
The web server determines which node it should redirect the request to for further processing.
The web server forwards the request to Tomcat, and Tomcat receives it.
Tomcat processes the request and sends the response back.
Regarding the database:
Tomcat itself has nothing to do with your database; it is your application that talks to the DB, not Tomcat.
Regardless of your application layer, you can establish a cluster of database servers (for example, google for Oracle RAC, but that is an entirely different story).
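As a concrete illustration of the Oracle RAC direction (hypothetical host names and service name; not specific to Tomcat), the JDBC connect descriptor itself can spread connections across the database nodes:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class RacConnectionExample {
        // LOAD_BALANCE=ON asks Oracle Net to distribute new connections across the
        // listed nodes; FAILOVER=ON retries the other address if one node is down.
        private static final String URL =
                "jdbc:oracle:thin:@(DESCRIPTION=" +
                "(ADDRESS_LIST=(LOAD_BALANCE=ON)(FAILOVER=ON)" +
                "(ADDRESS=(PROTOCOL=TCP)(HOST=db-node1)(PORT=1521))" +
                "(ADDRESS=(PROTOCOL=TCP)(HOST=db-node2)(PORT=1521)))" +
                "(CONNECT_DATA=(SERVICE_NAME=appsvc)))";

        public static void main(String[] args) throws Exception {
            // In a real webapp this URL would normally sit in a pooled DataSource
            // configured in the container, not hardcoded like this.
            try (Connection con = DriverManager.getConnection(URL, "app_user", "app_password")) {
                System.out.println("Connected: " + con.getMetaData().getURL());
            }
        }
    }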
In general, when implementing application-layer load balancing, please make sure that the shared state of the application gets replicated across nodes.
The technique called "sticky sessions" partially handles the issue, but in general you should be aware of it.
Hope this helps
This question is related to our web application and it has been bugging me for the last few months. We use a Linux server for the database and the application, and we have our own custom-built Java web server. If we make any change to the application's source code, we build a new jar file and replace the existing jar file with it. Then, for the update to take effect in the live application, we just open an HTML file containing this kind of code:
<frameset rows="100%">
<frame src="http://mydomain.com:8001/RESTART">
</frameset>
How does opening this URL make the application use the new jar file?
The webserver is instructed to give the /RESTART URL special treatment. This can either be through a mapping to a deployed servlet, or through a hardcoded binding to a web container action.
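For illustration only, a minimal sketch of the servlet-mapping variant; the URL pattern matches the question, but the class name and the body are hypothetical, since your custom server presumably does something equivalent internally:

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Any GET to /RESTART is routed here by the servlet container.
    @WebServlet("/RESTART")
    public class RestartServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            // A real implementation would close the class loader holding the old jar
            // and create a fresh one (or ask the container to redeploy the context).
            getServletContext().log("Restart requested from " + req.getRemoteAddr());
            resp.getWriter().println("Restart triggered");
        }
    }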
It is very common to have URLs with special meaning (usually protected by a password) allowing for remote maintenance, but there is no common rule set. You can see snapshots of the Tomcat administration console at http://linux-sxs.org/internet_serving/c516.html
EDIT: I noticed you mentioned a "custom built web server". If this web server does not provide servlets or JSPs - in other words, does not conform to the Servlet API - you may consider raising the flag about switching to a web server which does.
The Servlet API is a de-facto industry standard which allows you to cherry-pick from a wide array of web servers from the smallest for embedded devices to the largest enterprise servers spreading over multiple physical machines, without changing your code. This means that the hard work of making your application scale has been done by others. In addition they probably even made the web server as fast as possible, and if not, you can pick another where they did.
You're sending an HTTP GET to whatever's listening on that port (presumably your web server). The servlet spec supports pre- and post-request filters, so the server may have one set up to capture this particular request and handle it in a special fashion.
In our Java web application, the customer wants to upload some large files to an SFTP server and download them directly from there. The customer does not want to use any third-party tool; rather, they want this functionality in the application itself.
The file upload part has been taken care of by the JFileUpload applet component and libraries. Once the file gets uploaded, I can figure out the exact location of the stored file. That uploaded file will then be shown to users as a link which they can click to download (like an HTTP or FTP file link).
So I have to decide on the strategy for downloading the file from the SFTP server.
One option is to parse the request, connect to the SFTP server, and stream the file back through the HTTP server. But here the file will be downloaded over HTTP rather than SFTP, and moreover it will not serve the purpose of using SFTP.
Another option which I could think of is via an applet, again like upload. As soon as the request for the SFTP file comes to the HTTP server, it will launch a page containing an applet having a directory browser for users to decide the save path. Once the user selects the save location, the file will automatically start downloading to that location from the SFTP server. In this way the connection will be completely SFTP.
I want to know how feasible the second approach is and whether there are any important things I'll have to take care of. Which SFTP libraries are best for this type of operation?
Moreover, please let me know if there are other, better options for the mentioned activity.
Edit
It seems this post looks like a request for suggestions on ways to download from an SFTP server (maybe because of the heading, but I could not think of any other heading!). Thank you for the suggestions on the APIs, but the more important issue for us is to figure out a way for a user's request to download a file from the SFTP server to be served over secure SSH rather than over HTTP. Using the mentioned APIs we could very well download the files from the SFTP server to the HTTP server's filesystem, but after that, if we have to redirect the same file to the user's machine, we have to use HTTP, and that is what we want to avoid.
Our second approach, using a page with an applet which initiates an SFTP session between the user's client and the SFTP server, is meant to address the above concern.
How difficult will it be to implement and what should be our approach in this regard?
And if there is any other better & easier way to do the same task then please suggest.
I favor Commons VFS for this kind of thing. It abstracts out the actual file system type and lets you work with a standard interface regardless of the underlying implementation. It in turn depends on other libraries for the actual systems, in particular JSch for SFTP.
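A minimal sketch of a download with Commons VFS 2.x (host, credentials, and paths are placeholders):

    import org.apache.commons.vfs2.FileObject;
    import org.apache.commons.vfs2.FileSystemManager;
    import org.apache.commons.vfs2.Selectors;
    import org.apache.commons.vfs2.VFS;

    public class VfsSftpDownload {
        public static void main(String[] args) throws Exception {
            FileSystemManager manager = VFS.getManager();

            // The URI scheme decides the implementation; swapping sftp:// for ftp://
            // or file:// requires no other code changes.
            try (FileObject remote = manager.resolveFile(
                         "sftp://user:secret@sftp.example.com/outbound/report.zip");
                 FileObject local = manager.resolveFile("file:///tmp/report.zip")) {
                local.copyFrom(remote, Selectors.SELECT_SELF);
            }
        }
    }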
I recommend using JSch, Java Secure Channel. It is a pure Java implementation of SSH2. It has good examples for doing SFTP in addition to pretty much every other SSH2 feature (X forwarding, port forwarding, etc.). We use it in a number of our projects and have not had any issues. I have even tied its GSS-API (Kerberos) support into a native Kerberos implementation and it worked well. It is BSD licensed, so commercial or not, you shouldn't have many issues with licensing.
I see building an applet using JSch as pretty simple. The biggest issue will be making sure your applet is signed and has permissions to read/write local files and connect to the SSH servers in question.
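A minimal sketch of an SFTP download with JSch (host, credentials, and paths are placeholders; real code should verify the host key instead of disabling the check):

    import com.jcraft.jsch.ChannelSftp;
    import com.jcraft.jsch.JSch;
    import com.jcraft.jsch.Session;

    public class JschSftpDownload {
        public static void main(String[] args) throws Exception {
            JSch jsch = new JSch();
            Session session = jsch.getSession("user", "sftp.example.com", 22);
            session.setPassword("secret");
            // For a quick test only; use a known_hosts file in real code.
            session.setConfig("StrictHostKeyChecking", "no");
            session.connect();

            ChannelSftp sftp = (ChannelSftp) session.openChannel("sftp");
            sftp.connect();
            try {
                sftp.get("/outbound/report.zip", "/tmp/report.zip"); // remote -> local
            } finally {
                sftp.disconnect();
                session.disconnect();
            }
        }
    }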
The customer is always right, so while the requirement screams bad architecture to me, I'll just extend my sympathy on that and try to help you with the problem.
The applet approach is OK, but seems kind of clunky for a web app. There are JavaScript SFTP libraries out there. This one supports SFTP and will give a much more natural feel to a web application than popping up an applet just for the sake of providing a file transfer. It isn't free, but it isn't that pricey either. It still uses an applet under the hood to effect the file transfer; it just doesn't present a Java screen to the user.
Did you mean SFTP or FTPS (FTP over SSL)?
If you really meant SFTP, have a look here: http://www.spindriftpages.net/blog/dave/2007/11/27/sshtools-j2ssh-java-sshsftp-library/comment-page-1/
I have a Java web application designed to be deployed on the internet. It needs a database connection. Depending upon the hosting environment, it may not be possible for the deployers of the web application to configure appropriate data sources, so the application needs to store its database connection information somewhere so it can be reloaded if the application is restarted.
I also need to give one user administrator privileges. If this is just the first user account created, there is a small possibility that the admin account could be hijacked between the time the application is deployed and the time the installer logs in.
I need to do both of these tasks securely and in a way that is lowest common denominator for a web application.
Clarification: Upon first use I want the application to set up an admin user. That admin user will have all security access in the system. Somehow I need to determine what that user is called and what their password will be. If the application gets deployed on a shared web host, it will be live from the moment it is deployed. If I allow the site to be configured through a web interface, there is a small possibility that an unauthorised person will do the web configuration before the site owner, effectively hijacking the site. I am looking for an elegant way to avoid this.
Ok, to answer your revised question...
There isn't really that much you can do. If you don't want the admin to configure their account during installation on the server, then there will always be a small window where someone else might create it via the web before they do.
All the solutions involve modifying something on the server (as this is how they prove they are the real admin). Yes, that can mean a config file...
Upon first connect, give the user a token, basically a hash of some salt + their IP + their user agent, etc. Then ask them to log into the server and feed this token to your app, probably in a config file. If the token generated next time matches the one in the config, allow them to proceed (a sketch of this is shown below).

A simpler solution is to let them put their IP address in the config from the start, and just allow that IP. (This assumes they know what their IP address is.)

Alternatively, allow account creation, but refuse to do anything else until some file is removed from the server. Many PHP apps do this with an install.php, but the file could be anything you test for.
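A minimal sketch of the token idea (the salt handling, inputs, and method names are assumptions, not a definitive implementation):

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    public class FirstRunToken {
        // In a real app the salt would be generated at install time and kept server-side.
        private static final String SALT = "change-me-per-install";

        /** Builds the token the admin must copy into the server-side config file. */
        public static String tokenFor(String clientIp, String userAgent) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest((SALT + "|" + clientIp + "|" + userAgent)
                    .getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        }

        /** Allows admin setup only when the recomputed token matches the configured one. */
        public static boolean mayCreateAdmin(String clientIp, String userAgent,
                                             String tokenFromConfig) throws Exception {
            return tokenFromConfig != null
                    && MessageDigest.isEqual(
                            tokenFor(clientIp, userAgent).getBytes(StandardCharsets.UTF_8),
                            tokenFromConfig.getBytes(StandardCharsets.UTF_8));
        }
    }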
The most common way to do this is through a static configuration file, in some simple text format.
The file resides on the same system as the application, and should be just as secure as the code (e.g. if someone who shouldn't be able to can modify the configuration, couldn't they just as easily modify the code?).
For one of our Java web apps, we're using Spring dependency injection to configure most of the app. If you create a "Configuration" class with all of the configurable properties exposed, you can wire up a bean in Java that is configured via a Spring XML context file. You can then create different versions of the XML file for your different environments and have them automatically built into specific packages, which can be deployed all at once. If you want to go all out, you can basically configure every single class in your application using Spring, which is really useful.
There's a little bit of overhead to get Spring set up, but it's actually not too hard; there are plenty of tutorials out there.
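For illustration, a minimal sketch of that pattern (the bean class, property names, and the applicationContext.xml file name are assumptions):

    import org.springframework.context.support.ClassPathXmlApplicationContext;

    public class AppConfigDemo {

        /** Plain bean holding the configurable properties; Spring injects the values. */
        public static class AppConfiguration {
            private String jdbcUrl;
            private String jdbcUser;

            public void setJdbcUrl(String jdbcUrl) { this.jdbcUrl = jdbcUrl; }
            public void setJdbcUser(String jdbcUser) { this.jdbcUser = jdbcUser; }
            public String getJdbcUrl() { return jdbcUrl; }
            public String getJdbcUser() { return jdbcUser; }
        }

        public static void main(String[] args) {
            // applicationContext.xml (one per environment) would contain something like:
            // <bean id="appConfiguration" class="AppConfigDemo$AppConfiguration">
            //   <property name="jdbcUrl" value="jdbc:postgresql://db-host/app"/>
            //   <property name="jdbcUser" value="app_user"/>
            // </bean>
            ClassPathXmlApplicationContext ctx =
                    new ClassPathXmlApplicationContext("applicationContext.xml");
            try {
                AppConfiguration config = ctx.getBean("appConfiguration", AppConfiguration.class);
                System.out.println("DB URL: " + config.getJdbcUrl());
            } finally {
                ctx.close();
            }
        }
    }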