Short Background
We have two servers running Windows Server 2008: ServerA is an IBM WebSphere Application Server and ServerB is an IIS 7 web server that points to applications on ServerA. This setup currently works. We want to upgrade ServerB to Server 2012, but we cannot do an in-place upgrade, so we are installing Server 2012 on a new machine (ServerC) and replacing ServerB with it.
We cannot use Tomcat, and the original setup works properly (Internet <--> ServerB (web server) <--> ServerA (application server)).
My questions are (all of these apply after we swap out ServerB for ServerC):
1) Is there a way to test whether a web server is correctly configured to serve the WebSphere apps? My biggest barrier is that we cannot use the server machine itself to browse to any sites (I believe this is a group policy restriction, but I'm a software dev, not a system administrator, so I'm not certain). The applications we can run on the server are very limited. I have seen some references to using Snoop, which I don't know how to use but could learn; however, I don't think we are allowed to install it on the machine anyway.
2) When I navigate to a site hosted on IIS that points to a WebSphere application and am redirected to a Login.jsp page, why does the browser try to download the .jsp file instead of displaying it as a web page? I have not been able to find good Google/Stack Overflow/Server Fault results explaining why IIS in front of a WebSphere application server would prompt to download the .jsp file rather than display it.
3) When I navigate to some of the sites hosted on IIS that point to a WebSphere application, why would I receive a 403 Access Denied error on the new IIS server but not on the old one? The folders the web apps have access to are located either on the local machine (separate drive letter) or on the WebSphere application server. All of the local folders on the new server have been configured the same way as on the old server, and all of the local users and groups are set up the same.
Setup Information (More Detailed)
Here is our setup in more detail. We have two servers (Windows Server 2008): ServerA is an IBM WebSphere Application Server and ServerB is an IIS 7 web server. This setup predates everyone currently working at my organization, including me. There are 7 sites configured on IIS with virtual paths (that is, each site is named www.site_name.ourorg.domain). We have an IP address configured on the outward-facing NIC for each of the sites, and each site has a binding to its specific IP address on port 80 and port 443 (with valid certificates) and its own application pool. We do not have access to configure the domain controller (we are given the IPs to use, and someone at a different organization manages our domain server). All of the sites are currently in production and in use on a daily basis.
The Goal
Our goal is to stand up a new Windows Server 2012 web server (and eventually a new application server as well). Unfortunately, we cannot do an in-place upgrade, so our system admin decided the best route would be to set up a new server (ServerC): do a clean install of Windows Server 2012, install IIS with the same features and roles as on ServerB, and install the IBM WebSphere Plugins using the same plugin-cfg.xml file. (Later, when this failed, we reinstalled the WebSphere Plugins along with the Configuration Tool and created a new configuration with it, per the instructions on the IBM site noted below.) Then, once everything is installed and appears to be configured the same, we disable the outward-facing NIC on the existing web server (ServerB), rename it (since we use Active Directory) to a new name (ServerB-o), rename ServerC to ServerB, and enable the NIC on ServerC (now called ServerB) using the same IP and configuration as the old ServerB (ServerB-o).
The Issue
After we do all of this, we can access IIS (default page, which will be disabled after testing), and it looks like the sites pointing to WebSphere are responding to requests, but we are running into two issues:
1) Some of the sites are returning a 403 Access Denied. The application pools are running as ApplicationPoolIdentity, and all of the application pools (IIS APPPOOL\www.site_name.ourorg.domain) are added to the IUSR group. One peculiarity: when we set up the sePlugins virtual folder (for example) and choose "Connect As...", we cannot use .\localadmin or localadmin (both are admin users on the web server); it tells us the account name or password is incorrect. The old server is configured like this, though.
2) For any site that does not give the 403 error, instead of displaying the translated .jsp page, the browser prompts to download the .jsp file.
Other Information and Attempts
After trying to change the configuration on IIS and the WebSphere plugin multiple times, using a service account (on our AD) instead of .\localadmin, and a few days of research, I have realized that I do not know enough about how to configure servers, especially in this setup, to be of any more help. When we do the reverse (disable the NIC on the new ServerB, rename it to ServerC, rename ServerB-o back to ServerB, and re-enable the NIC), the sites come back up after somewhere between 15 minutes and 3 hours...
I just remembered that at one point I had to compare the ApplicationHost.config files and found that the ISAPI filters were not set properly on the new server, but I am fairly sure I have since configured everything on the new IIS the same as on the old IIS. The only thing that didn't get installed was HipIISEngineStub.dll, which appears to be a McAfee (Host Intrusion Prevention) DLL; it is on the old web server but not the new one.
We have tried standing up the new server 3 times, doing more research after each attempt, and I was able to resolve every issue but this one. Each time we try to stand up the new server we have to take production down for the remainder of the day, so I would prefer to find a way to test it without taking production down.
One More Note
One last note: the most recent thing I was able to do was set up the configuration on ServerC, leave the outward-facing NIC disabled, and create a new site using the same physical path and configuration, except that it binds all unassigned IP addresses on an unused port (let's say 11111, for example) for one of the apps. I added the sePlugins virtual directory to it and tested it from another workstation on the same domain by going to https://ServerC:11111. That successfully redirected me to https://www.site_name.ourorg.domain/app_sub/Login.jsp, which is being served by the old machine. I don't really know what this test means, other than that the new IIS is able to read the configuration file and perform the appropriate redirect.
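For what it's worth, that manual test can also be scripted from any workstation on the domain. The sketch below is only illustrative (the URL, port, and path are placeholders, and it uses plain HTTP to sidestep certificate-name mismatches); it just reports the status code, Content-Type, and redirect target, which is enough to tell a 403 apart from a JSP that is being handed back as a raw file instead of being executed:

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class PluginSmokeTest {
        public static void main(String[] args) throws Exception {
            // Placeholder URL: point this at the test binding on the new server,
            // e.g. the port-11111 site described above.
            URL url = new URL("http://ServerC:11111/app_sub/");

            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setInstanceFollowRedirects(false);   // show the redirect instead of following it
            conn.setRequestMethod("GET");

            // 200/302 means IIS and the plugin accepted the request; 403 reproduces the
            // access-denied problem; a text/html Content-Type means the JSP was executed,
            // while an attachment/octet-stream style response means the raw file was served.
            System.out.println("Status:       " + conn.getResponseCode());
            System.out.println("Content-Type: " + conn.getContentType());
            System.out.println("Location:     " + conn.getHeaderField("Location"));
            System.out.println("Server:       " + conn.getHeaderField("Server"));
            conn.disconnect();
        }
    }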
Resources
When installing WebSphere on the new webserver, I followed the steps at IBM's Site.
I have seen countless resources for the other issues I had, such as adding the app pools to the IUSR group, configuring an app pool to run as a specific identity, how multiple IPs on one NIC get bound to sites in IIS, and other sysadmin topics that I am not familiar with and don't fully grasp.
I would greatly appreciate any assistance with getting a new server set up to properly serve JSP pages through WebSphere, even just a resource for completely uninstalling and reinstalling WebSphere on the new machine. I am hesitant to make any configuration changes on the WebSphere Application Server itself, since we can easily roll back to using the old web server and the sites come back up. However, I am open to suggestions if that is where the issue is.
Once again, I apologize for posting a question that seemed to have too large a scope. I was able to get in contact with IBM support. The short answer is that while I had many other configuration issues, there were two main items preventing me from successfully serving the websites.
First, I had installed only the application server (Base installation) instead of the Network Deployment installation. This meant more steps were needed for the application server to serve multiple applications through the web server. It was resolved by following the steps in this tech note: setting up one plugin_root\bin\AppName folder and one plugin_root\config\AppName folder (and optionally one in the log folder as well) on the web server for each WebSphere application, and modifying the configuration files (plugin_root\bin\plugin-cfg.loc and plugin_root\config\plugin-cfg.xml) for each specific app. Then I needed to remove the ISAPI filter entry at the server level (in IIS Manager) and add an ISAPI filter entry for each site. I also needed to change the permissions on the Handler Mappings in the sePlugins virtual folder to allow Read and Execute (one of the main reasons for the 403 errors).
The second issue was that I needed to add the ports I was using for the test sites to the virtual hosts list using the Administrative Console, regenerate the plugins, and copy them to the web server (to the appropriate folders).
After getting everything up and running (and taking a snapshot of the server), I uninstalled everything and reinstalled WebSphere using the instructions listed in the resources section of my question, except I installed and configured it using the Network Deployment installation. This meant that I could have the ISAPI filter set at the server level in IIS manager, and have one folder to hold the iisWASPlugin file and associated loc, config, and log files. It turns out that I needed to set the permissions in the Handler Mappings for the sePlugins folder for each app to Read, Script, and Execute (having it just Script and Execute did not work for our setup), as well as making sure the ports were added to the Virtual Hosts list (therefore adding them to the config file).
I hope this helps someone in the future.
I have a webapp deployed to a specific server/host, and this webapp checks the health of the local server it's deployed on when you make a GET request to its URI.
The problem I am having is that if the server goes down, you can't even make a GET request to the health webapp (the webapp simply uses a URL object pointing at localhost and the port and reads back the response code).
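The check is roughly along these lines (the class name, URL, and port here are just illustrative, not the actual code):

    import java.net.HttpURLConnection;
    import java.net.URL;

    // Rough sketch of the existing in-process health check: open a connection to
    // the local server and report whatever response code comes back.
    public class LocalHealthCheck {
        public static int check() {
            try {
                // Placeholder URL -- the real app points at localhost and its own port/path.
                URL url = new URL("http://localhost:8080/healthapp/status");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setConnectTimeout(2000);
                conn.setReadTimeout(2000);
                return conn.getResponseCode();   // e.g. 200 when the server is answering
            } catch (Exception e) {
                return -1;                       // connection refused / timed out
            }
        }
    }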
Would creating a separate JVM for the webapp solve this problem? Or does the webapp need to be hosted entirely on another server?
I'm a beginner, so have mercy.
The short answer is that it's not feasible. You will need a third-party tool to check your server from outside the host itself. I can give you many suggestions, but just try these first: Datadog APM and Nagios.
I think it's totally reasonable to use a small JSP for a health check of the server itself.
If you can't reach the JSP, it's a pretty extreme case of the server being in trouble.
If you can reach the JSP, you can query some MBeans to summarize more detailed/nuanced health.
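As a rough illustration of the MBean idea (the class and method names here are mine), the health JSP or servlet can pull basic numbers from the platform MBeans of the JVM it runs in:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.ThreadMXBean;

    // Sketch of the kind of detail a health page could report by querying the
    // platform MBeans of its own JVM.
    public class HealthSummary {
        public static String summarize() {
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();

            long heapUsed = memory.getHeapMemoryUsage().getUsed();
            long heapMax  = memory.getHeapMemoryUsage().getMax();

            return String.format("heapUsed=%d heapMax=%d liveThreads=%d",
                    heapUsed, heapMax, threads.getThreadCount());
        }
    }

Container-specific MBeans can usually be queried the same way if you know their object names.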
We are using Apache Tomcat 8.0.18 as our web server. We are getting the expected output when the client sends about 5 to 8 concurrent requests.
But when the client sends about 30 to 40 concurrent requests, it gets unexpected errors related to packet loss while the requests travel over the Internet to the web server hosted on Tomcat.
We do not face the issue when testing the application in our local environment.
We have examined the web server logs and see that only some of the requests are reaching the web server. We installed Tomcat 8.0.18 with the default configuration.
Can anyone please guide us on whether we need to change any configuration at the Tomcat level to resolve this kind of packet loss issue?
Thanks
Dinesh
I suggest that you install a packet sniffer on the host where Tomcat is installed. Maybe the problem doesn't come from Tomcat.
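It can also help to reproduce the load yourself from a machine outside your local environment. A small concurrent client like this sketch (the URL and request count are placeholders) will show whether requests start failing at a particular concurrency level or only over the Internet path:

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    // Fires N concurrent GETs at the Tomcat instance and reports what happens to each,
    // which helps separate a Tomcat/connector limit from loss on the network path.
    public class ConcurrentProbe {
        public static void main(String[] args) throws Exception {
            final int requests = 40;
            ExecutorService pool = Executors.newFixedThreadPool(requests);

            for (int i = 0; i < requests; i++) {
                pool.submit(() -> {
                    try {
                        HttpURLConnection conn = (HttpURLConnection)
                                new URL("http://your-server:8080/yourapp/").openConnection();
                        conn.setConnectTimeout(5000);
                        conn.setReadTimeout(10000);
                        System.out.println("status=" + conn.getResponseCode());
                        conn.disconnect();
                    } catch (Exception e) {
                        System.out.println("failed: " + e.getMessage());
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(2, TimeUnit.MINUTES);
        }
    }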
In the process of demoing some new Java code that accesses a local MarkLogic server, I ran into the following error. It pops up any time I try to either load a file or access its metadata:
Only XML and JSON error messages supported by MarkLogic server.
This is getting triggered in calls to TextDocumentManager.readMetadata() and TextDocumentManager.read(). The code works fine on my machine but NOT on my supervisor's (he's the one seeing the error), which makes me think I tweaked something in the database configuration during development but didn't write it down. Unfortunately, I can't think of what that would be. Does anybody have any suggestions?
The message indicates that the server responded with an error without a Content-Type header declaring error content as JSON or XML.
Thus far, we've seen that Java exception only when the server was not initialized as a REST server.
So, please check your connection parameters. If in doubt, use an HTTP client like curl to make the equivalent request of the REST server to verify that the request is accepted.
If the REST server seems to be operational, you can also turn on error logging on the REST server to help debug the Java client.
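If curl is not handy, the same sanity check can be done with a few lines of Java. In this sketch the host, port, and request path are placeholders for whatever your client is configured with; the point is simply to look at what the server sends back:

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class RestServerCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder: use the same host and port the Java client is configured with.
            URL url = new URL("http://localhost:8011/v1/documents?uri=/example/doc.txt");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();

            // Without credentials this will typically come back 401, which is fine:
            // a REST instance still answers with an XML or JSON error body. If the
            // Content-Type is missing or text/html, the port is probably not a REST
            // app server at all.
            System.out.println("status=" + conn.getResponseCode());
            System.out.println("contentType=" + conn.getContentType());
        }
    }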
To answer the followup question (Stack Overflow timed out on the initial answer):
There's a UI for creating a REST server in the InfoStudio database configuration:
Go to port 8000 at the /appservices/ path.
Select the database from the drop-down and click Configure.
Add a REST API Instance near the bottom of the page.
There's also a REST interface for the admin user (not the REST admin user) to create REST instances on port 8002. For information about those services, please see
http://docs.marklogic.com/REST/client/service-management
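As a hedged sketch only, here is roughly what driving that service from Java might look like; verify the endpoint and payload shape against the documentation linked above, and treat the names, port, and credentials below as placeholders:

    import java.io.OutputStream;
    import java.net.Authenticator;
    import java.net.HttpURLConnection;
    import java.net.PasswordAuthentication;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    // Sketch: create a REST instance through the services on port 8002.
    // Check the payload keys against the service-management documentation.
    public class CreateRestInstance {
        public static void main(String[] args) throws Exception {
            // Admin credentials (placeholders); the default Java HTTP handler
            // negotiates the authentication challenge via this Authenticator.
            Authenticator.setDefault(new Authenticator() {
                protected PasswordAuthentication getPasswordAuthentication() {
                    return new PasswordAuthentication("admin", "admin-password".toCharArray());
                }
            });

            String body = "{ \"rest-api\": { \"name\": \"my-rest-server\","
                    + " \"database\": \"MyDatabase\", \"port\": \"8011\" } }";

            HttpURLConnection conn = (HttpURLConnection)
                    new URL("http://localhost:8002/v1/rest-apis").openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setDoOutput(true);
            try (OutputStream out = conn.getOutputStream()) {
                out.write(body.getBytes(StandardCharsets.UTF_8));
            }
            // Expect 201 Created on success, per the linked documentation.
            System.out.println("status=" + conn.getResponseCode());
        }
    }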
A really strange issue has been occurring lately with two legacy Struts applications running on separate RedHat 5/Tomcat 6 servers. Some brief details:
App 1 is the front-facing application
App 2 is an ancillary application which serves as a file repository system
App 1 has an upload form which forwards to App 2
App 2 expects multipart/form-data to be part of the Content-Type when an upload occurs
Uploading will work fine for a while, but will all of a sudden fail. When I look in the logs, App 2 is reporting that the Content-Type is missing and as such, cannot process the upload request. Furthermore, once it goes missing, it doesn't reappear. All attempts to upload will fail from that point forward and what's even more odd is that the only way to remedy the issue is to restart Tomcat hosting App 1, not App 2.
Other Oddities
Code that implements the upload feature has not changed in over a year
Using Wireshark (tshark) to sniff TCP packets
The Content-Type is properly populated on the HTTP request being sent from App 1
Although Wireshark reports a malformed packet, the Content-Type is present on the HTTP Request received on App2
Any ideas why this could be happening?
I would suspect there is some sort of state change on App 1 which is causing it to no longer send the Content-Type header in requests to App 2. Without seeing the code, there is little more that anyone could tell you.
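If you want more evidence before changing anything, a small logging filter on App 2 (a sketch; the class name and logging style are mine) will record the Content-Type of every request that actually reaches the upload path, which you can then line up with your Wireshark captures:

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;

    // Diagnostic-only: logs the Content-Type of each request hitting App 2's
    // upload path. Map it in web.xml to the upload URL pattern.
    public class ContentTypeLoggingFilter implements Filter {

        public void init(FilterConfig config) { }

        public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest http = (HttpServletRequest) request;
            System.out.println("[upload-debug] " + http.getMethod() + " " + http.getRequestURI()
                    + " Content-Type=" + http.getContentType()
                    + " Content-Length=" + http.getHeader("Content-Length"));
            chain.doFilter(request, response);
        }

        public void destroy() { }
    }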