I am trying to automate the login process to our web app with the JMeter WebDriver Sampler; the app requests an authorization certificate for a user to log in, and I am stuck on the following issue.
After filling in the credentials and clicking the Login button, the following window appears:
(screenshot: certificate selection dialog window)
I assume this is an OS-level window that can't be targeted by a Selenium/WebDriver Sampler script - or is it possible?
EDIT: I found a possible solution, e.g. https://sqa.stackexchange.com/questions/7640/how-to-select-security-certificate-from-security-dialog, but I am somewhat afraid to implement the recommended code in the script - isn't there another solution than via a Selenium script?
I tried to set a certificate in JMeter's system.properties file:
(screenshot: system.properties keystore setting)
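For reference, the keystore settings I mean are the standard JSSE properties; something along these lines (the path and password are placeholders):
javax.net.ssl.keyStore=C:/path/to/user-cert.p12
javax.net.ssl.keyStorePassword=changeit
javax.net.ssl.keyStoreType=pkcs12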
I expected it to change SOMETHING, e.g. produce some error after launching the script, but it ends at exactly the same step - the dialog window offering the certificate choice. So I assume this is the wrong place to set the user authentication certificate.
How can this kind of login process be handled? I guess it is necessary to set a default certificate that is paired with the user credentials I am sending in the previous step of my script, but I don't know where.
What you set in JMeter's "system.properties" file only affects client certificates for HTTP Request samplers.
The WebDriver Sampler is a different beast: it uses the Selenium libraries to automate a real browser, hence you need to follow your browser's documentation to learn how to automate the certificate selection process.
For example:
for Chrome on Windows it's set in the registry;
for Chrome on Linux/Unix it's under the /etc/chromium/policies/managed folder.
More information: AutoSelectCertificateForUrls
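For instance, on Windows the policy can be created as a string value named "1" under the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome\AutoSelectCertificateForUrls, with data like the following (the URL pattern and issuer CN are placeholders you need to adapt to your app and CA):
{"pattern":"https://your-app.example.com","filter":{"ISSUER":{"CN":"Your Issuing CA"}}}
On Linux/Unix the equivalent is a JSON policy file dropped into /etc/chromium/policies/managed (the file name is arbitrary), for example:
{
  "AutoSelectCertificateForUrls": ["{\"pattern\":\"https://your-app.example.com\",\"filter\":{\"ISSUER\":{\"CN\":\"Your Issuing CA\"}}}"]
}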
Up until last week my Selenium UI tests were running successfully, logging into the system using SSO (with the test account details that I pass in the URL). If I don't pass the details, the test used to fail, as no details were entered in the auth popup. This is expected behaviour, since Selenium launches a new clean profile which doesn't have any stored details.
I'm always logged into the system, hence when I open the browser manually the default profile is launched, which logs me in automatically using SSO.
When I open incognito mode and try to log into the site, it asks for sign-in details since incognito mode doesn't have my details stored. This is also the expected behaviour.
Since yesterday I have noticed that when I run the Selenium UI tests, they ignore the details I pass in the URL and keep logging me in as myself. I can confirm that a new clean profile is being used. I always delete cookies before launching the browser:
driver.manage().deleteAllCookies();
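For context, a rough sketch of how the driver is brought up with a throwaway profile so no stored SSO state should leak in (the profile path and options here are illustrative, not our exact project code):
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;

ChromeOptions options = new ChromeOptions();
// point Chrome at an empty, throwaway profile directory (illustrative path)
options.addArguments("--user-data-dir=C:\\temp\\selenium-profile");
// incognito as an extra guard against persisted session state
options.addArguments("--incognito");
WebDriver driver = new ChromeDriver(options);
driver.manage().deleteAllCookies();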
No code change has happened in the last two weeks, and the tests started failing yesterday, so I cannot blame the code. Also, this happens on both Chrome and the new MS Edge. I cannot confirm on Firefox since our organization doesn't support it.
I tried running the tests on another developer's machine and the result is the same; the tests were passing on their machine before but not now. I checked with the Security team and they confirmed they have made no changes to SSO for a long time. I am not sure how to debug this. Below are the things I tried from my side:
Changed the system password twice - no luck
Restarted Windows twice (hard power off and on) - no luck
The test works in Jenkins as expected, since no default SSO details are stored on the Jenkins server (a Windows server).
Can anyone please help me solve, or at least understand, this issue?
Browser - Chrome 86.0.4240.198 and Edge 86.0.622.69
Selenium - 3.141
JDK - 1.8.0
OS - Win 10
Short Background
We have two servers (Windows Server 2008). ServerA is an IBM WebSphere Application Server and ServerB is an IIS 7 webserver that points to applications on ServerA. It currently works. We want to upgrade ServerB to Server 2012, but cannot do an in place upgrade, so we are installing it on a new server (ServerC) and replacing ServerB with it.
We cannot use Tomcat, and the original setup works properly (Internet <--> ServerB (WebServer) <--> ServerA (Application Server).
My questions are (all of these apply to what happens after we swap out ServerB for ServerC):
1) Is there a way to test whether a webserver is correctly configured to serve the WebSphere apps? (A rough sketch of the kind of check I have in mind follows this list.) I think my biggest barrier is that we cannot use the server machine itself to browse to any sites (I believe it is a group policy... but again, I'm just a software dev and not as knowledgeable about server configuration and system administration). The applications that we can use on the server are very limited, but I have seen some things about using Snoop (which I do not know how to use, but could find out... though I don't think we are allowed to install it on the machine anyway).
2) When I navigate to a site hosted on IIS that points to a WebSphere application and it redirects me to a Login.jsp page, why does the browser try to download the .jsp file instead of displaying it as a web page? I have not been able to find good Google/Stack Overflow/Server Fault results explaining why a site hosted on IIS pointing to a WebSphere application server would prompt to download .jsp files instead of displaying them.
3) When I try to navigate to some sites hosted on IIS that point to a WebSphere application, why would I receive a 403 Access Denied error on the new IIS server, but not the old server? The folders that the web apps have access to are located either on the local machine (a separate drive letter) or on the WebSphere application server. All of the local folders on the new server have been configured the same way as on the old server, and all of the local users and groups are set up the same.
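For question 1, the kind of check I have in mind is hitting a test binding from another workstation and looking at the status code and content type; this is purely a sketch, with a made-up URL, port, and class name:
import java.net.HttpURLConnection;
import java.net.URL;

public class PluginSmokeTest {
    public static void main(String[] args) throws Exception {
        // hypothetical plain-HTTP test binding on the new server
        // (plain HTTP avoids certificate trust issues during the check)
        URL url = new URL("http://ServerC:11111/app_sub/Login.jsp");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        System.out.println("Status: " + conn.getResponseCode());
        System.out.println("Content-Type: " + conn.getContentType());
        // text/html suggests the request reached WebSphere and the JSP was executed;
        // a download-style content type suggests IIS served the raw .jsp file itself
    }
}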
Setup Information (More Detailed)
In this part, I would like to show our setup: We have two servers (Windows Server 2008). ServerA is an IBM WebSphere Application Server and ServerB is an IIS 7 webserver. This setup was around before anyone that is currently working at my organization (including myself) started. There are 7 sites configured/setup on IIS with virtual paths (that is, the site is named www.site_name.ourorg.domain). We have an IP address configured on the outward facing NIC for each of the sites and each site has a binding to its specific ip address with port 80 and port 443 (with valid certificates) and their own application pools. We do not have access to configure the domain controller (we are given the IPs to use and someone at a different organization manages our domain server). All of the sites are currently in production and in use on a daily basis.
The Goal
Our goal is to stand up a new Windows Server 2012 webserver (and eventually a new application server as well). Unfortunately, we cannot do an in-place upgrade, so our System Admin decided the best route would be to set up a new server (ServerC), do a clean install of Windows Server 2012, install IIS 7 using the same features and roles that are on ServerB, install the IBM WebSphere Plugins, and use the same plugin-cfg.xml file. (Later on, when this failed, we reinstalled the WebSphere Plugins as well as the Configuration Tool and created a new configuration with it, per the instructions on the IBM site noted below.) Then, once it is installed and everything appears to be configured the same, the plan is to:
disable the outward-facing NIC on the existing webserver (ServerB),
rename it (since we use Active Directory) to a new name (ServerB-o),
rename ServerC to ServerB, and
enable the NIC on ServerC (now called ServerB) using the same IP and configuration as the old ServerB (ServerB-o).
The Issue
After we do all of this, we can access IIS (default page, which will be disabled after testing), and it looks like the sites pointing to WebSphere are responding to requests, but we are running into two issues:
1) Some of the sites are returning a 403 Access Denied. The application pools are running as ApplicationPoolIdentity and all of the application pool identities (IIS APPPOOL\www.site_name.ourorg.domain) are added to the IUSR group. One peculiarity is that when we are setting up the sePlugins virtual folder (for example) and choosing "Connect As...", we cannot use .\localadmin or localadmin (both are admin users on the webserver); it tells us that the account name or password is incorrect. The old server is configured like this, though.
2) For any site that does not give the 403 error, instead of displaying the translated .jsp page, the browser prompts to download the .jsp file.
Other Information and Attempts
After trying to change the configuration of IIS and the WebSphere plugin multiple times, using a service account (in our AD) instead of .\localadmin, and a few days of research, I have realized that I do not know enough about how to configure servers, especially in this setup, to be of any more help. When we do the reverse (disable the NIC on the new ServerB, rename it to ServerC, rename ServerB-o back to ServerB, and re-enable the NIC), the sites come back up after somewhere between 15 minutes and 3 hours...
I just remembered that there was a part where I had to compare the ApplicationHost.config files and found that the ISAPI filters were not properly set on the new server, but I am pretty sure I got everything configured on the new IIS the same as on the old IIS. The only thing that didn't get installed was HipIISEngineStub.dll, which seems to be a McAfee-related dll (host intrusion prevention). It is on the old webserver, but not the new.
We have tried standing up the new server 3 times, and I have done more research in between each issue and was able to resolve all of them but this one. Each time we try to stand up the new server, we have to take down production for the remainder of the day, so I would prefer to be able to find a way to test it without taking production down.
One More Note
One last note: the most recent thing I was able to do was set up the configuration on ServerC, leave the outward-facing NIC disabled, and create a new site using the same physical path and configuration, except that it binds to all unassigned IP addresses on an unused port (let's say 11111, for example) for one of the apps. I added the sePlugins virtual directory to it and tested it from another workstation on the same domain by going to https://ServerC:11111. That successfully redirected me to https://www.site_name.ourorg.domain/app_sub/Login.jsp <- which is being served by the old machine. I don't really know what this test proves, other than that the new IIS can read the configuration file and perform the appropriate redirect steps.
Resources
When installing WebSphere on the new webserver, I followed the steps at IBM's Site.
I have seen countless resources for the other issues I had, such as adding the app pools to the IUSR group, configuring an app pool to run as a specific identity, how multiple IPs on a NIC are bound to sites in IIS, and other manner of sysadmin stuff that I am not familiar with, nor fully grasp.
I would greatly appreciate any assistance with getting a new server set up to properly serve JSP pages through WebSphere - even a resource for completely uninstalling and reinstalling WebSphere on the new machine. I am hesitant to make any configuration changes on the WebSphere Application Server itself, since we can easily roll back to using the old webserver and the sites come back up. However, I am open to suggestions if that is where the issue is.
Once again, I apologize for posting a question that seemed to have too large a scope. I was able to get in contact with IBM support. The short answer is that while I had many other configuration issues, there were two main items preventing me from successfully serving the websites.
First, I had installed only the application server (Base installation) instead of the Network Deployment installation. This meant that more steps were needed for the web server to serve multiple applications from the application server. This was resolved by following the steps in this tech note. It involved setting up one plugin_root\bin\AppName and one plugin_root\config\AppName folder (and optionally one in the log folder as well) on the web server for each WebSphere application, as well as modifying the configuration files (plugin_root\bin\plugin-cfg.loc and plugin_root\config\plugin-cfg.xml) for each specific app. Then I needed to remove the ISAPI filter entry at the server level (in IIS Manager) and add an ISAPI filter entry for each site. I also needed to change the permissions on the Handler Mappings in the sePlugins virtual folder to allow Read and Execute (one of the main reasons for the 403 errors).
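To make the per-app layout concrete (folder and app names here are only an illustration of the pattern, not our exact values), each application ended up with its own copy of the plugin files, and each plugin-cfg.loc contains a single line pointing at that app's plugin-cfg.xml:
plugin_root\bin\AppName\iisWASPlugin_http.dll
plugin_root\bin\AppName\plugin-cfg.loc      -> contains: plugin_root\config\AppName\plugin-cfg.xml
plugin_root\config\AppName\plugin-cfg.xml
plugin_root\logs\AppName\                   (optional, per app)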
The second issue was that I needed to add the ports I was using for the test sites to the virtual hosts list using the Administrative Console, regenerate the plugin configuration, and copy it to the web server (to the appropriate folders).
After getting everything up and running (and taking a snapshot of the server), I uninstalled everything and reinstalled WebSphere using the instructions listed in the resources section of my question, except I installed and configured it using the Network Deployment installation. This meant that I could have the ISAPI filter set at the server level in IIS manager, and have one folder to hold the iisWASPlugin file and associated loc, config, and log files. It turns out that I needed to set the permissions in the Handler Mappings for the sePlugins folder for each app to Read, Script, and Execute (having it just Script and Execute did not work for our setup), as well as making sure the ports were added to the Virtual Hosts list (therefore adding them to the config file).
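For reference, the Handler Mappings "Edit Feature Permissions" setting corresponds to the accessPolicy attribute on the handlers section; what we ended up with for each sePlugins folder is roughly equivalent to this web.config fragment (shown only to illustrate the setting, not copied from our servers):
<configuration>
  <system.webServer>
    <handlers accessPolicy="Read, Script, Execute" />
  </system.webServer>
</configuration>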
I hope this helps someone in the future.
I'm wondering how I'd go about allowing a connection from a Java application to completely bypass Cloudflare for my site. I've disabled browser integrity checks for my RSS feed connections, which has allowed those through, but whenever Cloudflare is active, clicking the 'Play Now' button to update the client makes it go grey, as it should, and then it remains like that. No errors (not even 404/403) are printed when this happens, and the client will not download.
The only thing that completely resolves this is pausing Cloudflare and fully disabling it for my site. I've tried adding page rules for the download URL, none of which have solved it.
I don't think it's possible to do this with Page Rules on the Free plan.
Maybe you can do it by using a subdomain and disabling the "orange cloud", so traffic will bypass any Cloudflare setting (Cloudflare will just resolve DNS to the server's IP).
Alternatively you could set "Security Level: Off", but that is only available on the Enterprise plan, and I'm not sure whether you can use it in a Page Rule - at least on the Free plan (what I'm using) the "Off" value doesn't appear in the Page Rule config.
I am using JMeter 2.13, and with it I used to record many scripts successfully without any issue. Now my OS has been reinstalled and I am running Windows 8.1, 64-bit. After the reinstallation I am not able to record HTTPS web applications, even though my proxy configuration is correct. After I set everything up in JMeter, click Start on the recorder in the WorkBench, and navigate to the application in the browser, it shows a "Server not found" message.
However, the scripts which I saved earlier work fine without any issues; only new recordings are not working.
Please help me with possible solutions.
"Server not found" indicates that browser is unable to access Internet (or intranet).
Most likely you're sitting behind the corporate proxy server and in previous JMeter installation you had these proxy server details specified in system.properties file like:
http.proxyHost=10.20.30.40
http.proxyPort=3128
https.proxyHost=10.20.30.40
https.proxyPort=3128
Double-check with your network administrator whether this is the case; if yes, take the steps from the "Using JMeter behind a proxy" User Manual chapter.
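For example, the proxy can also be supplied on the JMeter command line, along these lines (the host and port values are just examples matching the properties above):
jmeter -H 10.20.30.40 -P 3128 -N localhost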
You can also try out the JMeter Chrome Extension as an alternative solution - in that case you don't have to worry about proxies and SSL certificate substitution.