Authenticating users via LDAP with Shiro - java

Total newbie to java/groovy/grails/shiro/you-name-it, so bear with me. I have exhausted tutorials and all the "Shiro LDAP" searches available and still cannot get my project working.
I am running all of this on GGTS with jdk1.7.0_80, Grails 2.3.0, and Shiro 1.2.1.
I have a working project and have successfully run quick-start-shiro, which built the domains ShiroRole and ShiroUser, the controller authController, the view login.gsp, and the realm ShiroDbRealm. I created a faux user in BootStrap with
def user = new ShiroUser(username: "user123", passwordHash: new Sha256Hash("password").toHex())
user.addToPermissions("*:*")
user.save()
and can successfully log into my homepage; for all intents and purposes, that is as far as I have gotten. I cannot find a top-down tutorial on how to now log in with my username and password (authenticated through an LDAP server that I have available). From what I understand, I need to create a shiro.ini file and include something along the lines of
[main]
ldapRealm = org.apache.shiro.realm.activedirectory.ActiveDirectoryRealm
ldapRealm.url = ldap://MYURLHERE/
However, I don't even know where to put this shiro.ini file. I've seen /src/main/resources mentioned, but there is no such directory. Do I create it manually, or is it generated by some script?
The next step seems to be creating the SecurityManager, which somehow reads shiro.ini, with code along the lines of
Factory<org.apache.shiro.mgt.SecurityManager> factory = new IniSecurityManagerFactory("actived.ini");
// Setting up the SecurityManager...
org.apache.shiro.mgt.SecurityManager securityManager = factory.getInstance();
SecurityUtils.setSecurityManager(securityManager);
However, this always appears in some Java file in the tutorials, but my project is a Groovy project inside GGTS. Do I need to create a Java file and put it in src/java or something like that?
I've recently found that I may need a ShiroLdapRealm file (similar to ShiroDbRealm) with information like
def appConfig = grailsApplication.config
def ldapUrls = appConfig.ldap.server.url ?: [ "ldap://MYURLHERE/" ]
def searchBase = appConfig.ldap.search.base ?: ""
def searchUser = appConfig.ldap.search.user ?: ""
def searchPass = appConfig.ldap.search.pass ?: ""
def usernameAttribute = appConfig.ldap.username.attribute ?: "uid"
def skipAuthc = appConfig.ldap.skip.authentication ?: false
def skipCredChk = appConfig.ldap.skip.credentialsCheck ?: false
def allowEmptyPass = appConfig.ldap.allowEmptyPasswords != [:] ? appConfig.ldap.allowEmptyPasswords : true
and the corresponding settings in Config.groovy along the lines of
ldap.server.url = ["ldap://MYURLHERE/"]
ldap.search.base = 'dc=COMPANYNAME,dc=com'
ldap.search.user = '' // if empty or null --> anonymous user lookup
ldap.search.pass = 'password' // only used with non-anonymous lookup
ldap.username.attribute = 'AccountName'
ldap.referral = "follow"
ldap.skip.credentialsCheck = false
ldap.allowEmptyPasswords = false
ldap.skip.authentication = false
But putting all these pieces together hasn't gotten me anywhere! Am I at least on the right track? Any help would be greatly appreciated!

The /src/main/resources directory is created automatically if your project uses Maven; otherwise, you can simply create it yourself.
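Once shiro.ini is on the runtime classpath (in a Grails 2.x project, files under grails-app/conf typically end up there as well), the SecurityManager can be built from it without hard-coding a file path. A minimal Groovy sketch, assuming the file is named shiro.ini and sits on the classpath:
import org.apache.shiro.SecurityUtils
import org.apache.shiro.config.IniSecurityManagerFactory

// build a SecurityManager from the ini on the classpath and register it globally
def factory = new IniSecurityManagerFactory("classpath:shiro.ini")
def securityManager = factory.getInstance()
SecurityUtils.setSecurityManager(securityManager)
Since Groovy compiles to the same bytecode as Java, this does not need to live in a Java file; any Groovy class or script (for example BootStrap.groovy) can run it.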


Gradle removing comments and reformatting properties file

When I try to edit a property within Gradle, it reformats my entire properties file and removes the comments. I assume this is because of the way Gradle reads and writes the properties file. I would like to change just one property and leave the rest of the file untouched, including the existing comments and the order of the values. Is this possible using Gradle 5.2.1?
I have tried just using setProperty (which does not write to the file), using a different writer (versionPropsFile.withWriter { versionProps.store(it, null) }),
and a different way to read in the properties file: versionProps.load(versionPropsFile.newDataInputStream())
Here is my current Gradle code:
File versionPropsFile = file("default.properties");
def versionProps = new Properties()
versionProps.load(versionPropsFile.newDataInputStream())
int version_minor = versionProps.getProperty("VERSION_MINOR")
int version_build = versionProps.getProperty("VERSION_BUILD")
versionProps.setProperty("VERSION_MINOR", 1)
versionProps.setProperty("VERSION_BUILD", 2)
versionPropsFile.withWriter { versionProps.store(it, null) }
Here is a piece of what the properties file looks like before Gradle touches it:
# Show splash screen at startup (yes* | no)
SHOW_SPLASH = yes
# Start in minimized mode (yes | no*)
START_MINIMIZED = no
# First day of week (mon | sun*)
# FIRST_DAY_OF_WEEK = sun
# Version number
# Format: MAJOR.MINOR.BUILD
VERSION_MAJOR = 1
VERSION_MINOR = 0
VERSION_BUILD = 0
# Build value is the date
BUILD = 4-3-2019
Here is what Gradle does to it:
#Wed Apr 03 11:49:09 CDT 2019
DISABLE_L10N=no
LOOK_AND_FEEL=default
ON_MINIMIZE=normal
CHECK_IF_ALREADY_STARTED=YES
VERSION_BUILD=0
ASK_ON_EXIT=yes
SHOW_SPLASH=yes
VERSION_MAJOR=1
VERSION_MINOR=0
VERSION_BUILD=0
BUILD=04-03-2019
START_MINIMIZED=no
ON_CLOSE=minimize
PORT_NUMBER=19432
DISABLE_SYSTRAY=no
This is not a Gradle issue per se. The default Properties object of Java does not preserve any layout/comment information of properties files. You can use Apache Commons Configuration, for example, to get layout-preserving properties files.
Here’s a self-contained sample build.gradle file that loads, changes and saves a properties file, preserving comments and layout information (at least to the degree that is required by your example file):
buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath 'org.apache.commons:commons-configuration2:2.4'
    }
}

import org.apache.commons.configuration2.io.FileHandler
import org.apache.commons.configuration2.PropertiesConfiguration
import org.apache.commons.configuration2.PropertiesConfigurationLayout

task propUpdater {
    doLast {
        def versionPropsFile = file('default.properties')
        def config = new PropertiesConfiguration()
        def fileHandler = new FileHandler(config)
        fileHandler.file = versionPropsFile
        fileHandler.load()
        // TODO change the properties in whatever way you like; as an example,
        // we’re simply incrementing the major version here:
        config.setProperty('VERSION_MAJOR',
                (config.getProperty('VERSION_MAJOR') as Integer) + 1)
        fileHandler.save()
    }
}
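Running gradle propUpdater should then bump VERSION_MAJOR in place while keeping the comments and the ordering of the other entries intact (at least for a file like the one in the question).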

How to store particular value from the JDBC Request response in the Custom Property using Script Assertion - SoapUI?

In my test suite, four requests have been added and the last step is a JDBC Request.
After running this JdbcRequest step, I'm trying to get the phone number from the response. For that I have written the following script in the Script Assertion of the JdbcRequest step.
import groovy.util.*
import groovy.lang.*
import com.eviware.soapui.model.testsuite.*
def groovyUtils = new com.eviware.soapui.support.GroovyUtils( context )
def responseHolder = groovyUtils.getXmlHolder( context.responseAsXml )
def pNo = responseHolder.getNodeValue("//*:Results/*:ResultSet/*:Row/*:PHONE_NUMBER")
log.info pNo
testRunner.testCase.setPropertyValue("JdbcPhoneNo",pNo) // Not storing in the property
I execute the three requests using the Groovy Script, i.e. the first step.
After the execution completes (JdbcRequest), the Script Assertion is not storing the phone number and it shows as NULL. I tried the following ways but had no luck.
//def x = messageExchange.modelItem.testStep.testCase.setPropertyValue("JdbcPhoneNo",pNo)
//context.testCase.project.setPropertyValue("JdbcPhoneNo",pNo)
//context.testCase.testSuite.setPropertyValue("JdbcPhoneNo",pNo)
//testRunner.testCase.testSuite.project.setPropertyValue("JdbcPhoneNo",pNo)
Your suggestion please.
Thanks
You're almost there....
The line below is a set command, and it is the one to use, but you're assigning it to a variable:
//def x = messageExchange.modelItem.testStep.testCase.setPropertyValue("JdbcPhoneNo",pNo)
Instead, change it to
messageExchange.modelItem.testStep.testCase.setPropertyValue("JdbcPhoneNo",pNo)
I found the answer to the issue above.
I used
def responseHolder = groovyUtils.getXmlHolder( messageExchange.responseContent )
instead of
def responseHolder = groovyUtils.getXmlHolder( context.responseAsXml )
Now the value is stored into the property on each execution.
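Putting the two corrections together, a Script Assertion along these lines should work (a sketch assembled from the snippets above; in a Script Assertion only log, context and messageExchange are available, so the test case is reached through messageExchange):
def groovyUtils = new com.eviware.soapui.support.GroovyUtils( context )
// read the JDBC response from the message exchange instead of context.responseAsXml
def responseHolder = groovyUtils.getXmlHolder( messageExchange.responseContent )
def pNo = responseHolder.getNodeValue("//*:Results/*:ResultSet/*:Row/*:PHONE_NUMBER")
log.info pNo
// there is no testRunner variable here, so navigate to the test case via the model item
messageExchange.modelItem.testStep.testCase.setPropertyValue("JdbcPhoneNo", pNo)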

How to configure SparkContext for an HA-enabled cluster

When I run the Spark application in YARN mode using the HDFS file system, it works fine as long as I provide the properties below.
sparkConf.set("spark.hadoop.yarn.resourcemanager.hostname",resourcemanagerHostname);
sparkConf.set("spark.hadoop.yarn.resourcemanager.address",resourcemanagerAddress);
sparkConf.set("spark.yarn.stagingDir",stagingDirectory );
But the problems with this are:
Since my HDFS is NameNode HA enabled, it won't work when I give spark.yarn.stagingDir the common (nameservice) HDFS URL.
E.g. hdfs://hdcluster/user/tmp/ gives an error that says:
has unknown host hdcluster
It works fine when I give the URL as hdfs://<ActiveNameNode>/user/tmp/, but we don't know in advance which NameNode will be active, so how do I resolve this?
A few things I have noticed: SparkContext takes a Hadoop configuration, but the SparkConf class doesn't have any method to accept a Hadoop configuration.
How do I provide the Resource Manager address when the Resource Managers are running in HA?
You need to use the configuration parameters that are already present in the Hadoop config files such as yarn-site.xml and hdfs-site.xml.
Initialize the Configuration object using:
val conf = new org.apache.hadoop.conf.Configuration()
To check the current HDFS URI, use:
val currentFS = conf.get("fs.defaultFS");
You will get an output with the URI of your namenode, something like:
res0: String = hdfs://namenode1
To check the address of current resource manager in use, try:
val currentRMaddr = conf.get("yarn.resourcemanager.address")
I have had the exact same issue. Here is the solution (finally):
You have to configure the Spark context's internal Hadoop configuration for HDFS HA. When instantiating the SparkContext or SparkSession, it will pick up all configuration entries whose keys start with spark.hadoop. and use them when building the Hadoop Configuration.
So, in order to be able to use hdfs://namespace/path/to/file and not get an invalid host exception, add the following configuration options:
spark.hadoop.fs.defaultFS = "hdfs://my-namespace-name"
spark.hadoop.ha.zookeeper.quorum = "real.hdfs.host.1.com:2181,real.hdfs.host.2.com:2181"
spark.hadoop.dfs.nameservices = "my-namespace-name"
spark.hadoop.dfs.client.failover.proxy.provider.my-namespace-name = org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
spark.hadoop.dfs.ha.automatic-failover.enabled.my-namespace-name = true
spark.hadoop.dfs.ha.namenodes.my-namespace-name = "realhost1,realhost2"
spark.hadoop.dfs.namenode.rpc-address.my-namespace-name.realhost1 = "real.hdfs.host.1.com:8020"
spark.hadoop.dfs.namenode.servicerpc-address.my-namespace-name.realhost1 = "real.hdfs.host.1.com:8022"
spark.hadoop.dfs.namenode.http-address.my-namespace-name.realhost1 = "real.hdfs.host.1.com:50070"
spark.hadoop.dfs.namenode.https-address.my-namespace-name.realhost1 = "real.hdfs.host.1.com:50470"
spark.hadoop.dfs.namenode.rpc-address.my-namespace-name.realhost2 = "real.hdfs.host.2.com:8020"
spark.hadoop.dfs.namenode.servicerpc-address.my-namespace-name.realhost2 = "real.hdfs.host.2.com:8022"
spark.hadoop.dfs.namenode.http-address.my-namespace-name.realhost2 = "real.hdfs.host.2.com:50070"
spark.hadoop.dfs.namenode.https-address.my-namespace-name.realhost2 = "real.hdfs.host.2.com:50470"
spark.hadoop.dfs.replication = 3
spark.hadoop.dfs.blocksize = 134217728
spark.hadoop.dfs.client.use.datanode.hostname = false
spark.hadoop.dfs.datanode.hdfs-blocks-metadata.enabled = true
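Since the question sets options on SparkConf directly, the same keys can also be supplied programmatically instead of through a properties file; a short sketch (Groovy syntax over the standard SparkConf API, reusing the placeholder nameservice and host names from the list above):
def sparkConf = new org.apache.spark.SparkConf()
// the spark.hadoop. prefix makes these entries land in the Hadoop Configuration that Spark builds internally
sparkConf.set("spark.hadoop.fs.defaultFS", "hdfs://my-namespace-name")
sparkConf.set("spark.hadoop.dfs.nameservices", "my-namespace-name")
sparkConf.set("spark.hadoop.dfs.ha.namenodes.my-namespace-name", "realhost1,realhost2")
sparkConf.set("spark.hadoop.dfs.namenode.rpc-address.my-namespace-name.realhost1", "real.hdfs.host.1.com:8020")
sparkConf.set("spark.hadoop.dfs.namenode.rpc-address.my-namespace-name.realhost2", "real.hdfs.host.2.com:8020")
sparkConf.set("spark.hadoop.dfs.client.failover.proxy.provider.my-namespace-name",
        "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider")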
You are probably looking for the HADOOP_CONF_DIR property in spark-env.sh. That environment variable should point to the directory where hdfs-site.xml and core-site.xml exist (the same ones used to start the Hadoop HA cluster). You should then be able to use hdfs://namespace/path/to/file without issues.

Apache-Zeppelin / Spark : Why can't I access a remote DB with this code sample

I am taking my first steps with Spark and Zeppelin and don't understand why this code sample isn't working.
First Block:
%dep
z.reset() // clean up
z.load("/data/extraJarFiles/postgresql-9.4.1208.jar") // load a jdbc driver for postgresql
Second Block:
%spark
// This code loads some data from a PostgreSQL DB with the help of a JDBC driver.
// The JDBC driver is stored on the Zeppelin server; the necessary code is transferred to the Spark workers and the workers build the connection with the DB.
//
// The connection between table and data source is "lazy", so the data will only be loaded when an action needs it.
// With the current script this means the DB is queried twice. ==> Q: How can I keep an RDD in memory or on disk?
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
import org.apache.spark.rdd.JdbcRDD
import java.sql.Connection
import java.sql.DriverManager
import java.sql.ResultSet
import org.apache.spark.sql.hive._
import org.apache.spark.sql._
val url = "jdbc:postgresql://10.222.22.222:5432/myDatabase"
val username = "postgres"
val pw = "geheim"
Class.forName("org.postgresql.Driver").newInstance // activating the jdbc driver. The jar file was loaded inside of the %dep block
case class RowClass(Id:Integer, Col1:String , Col2:String) // create a class with possible values
val myRDD = new JdbcRDD(sc, // SparkContext sc
    () => DriverManager.getConnection(url,username,pw), // scala.Function0<java.sql.Connection> getConnection
    "select * from tab1 where \"Id\">=? and \"Id\" <=? ", // String sql. Important: we need two '?' here for the lower/upper bound values
    0,     // long lowerBound = start value
    10000, // long upperBound = end value that is still included
    1,     // int numPartitions = the range is split into x sub-commands,
           // e.g. 0,1000,2 => first cmd from 0 ... 499, second cmd from 500..1000
    row => RowClass(row.getInt("Id"),
                    row.getString("Col1"),
                    row.getString("Col2"))
)
myRDD.toDF().registerTempTable("Tab1")
// --- improved methode (not working at the moment)----
val prop = new java.util.Properties
prop.setProperty("user",username)
prop.setProperty("password",pw)
val tab1b = sqlContext.read.jdbc(url,"tab1",prop) // <-- not working
tab1b.show
So what is the problem?
I want to connect to an external PostgreSQL DB.
The first block adds the necessary JAR file for the DB, and the first lines of the second block already use the JAR and are able to get some data out of the DB.
But that first way is ugly, because you have to convert the data into a table yourself, so I want to use the easier method at the end of the script.
But I am getting the error message
java.sql.SQLException: No suitable driver found for
jdbc:postgresql://10.222.22.222:5432/myDatabase
But it is the same URL / same login / same PW from the above code.
Why is this not working?
Maybe somebody has a helpful hint for me.
---- Update: 24.3. 12:15 ---
I don't think the loading of the JAR is the problem. For testing I added an extra val db = DriverManager.getConnection(url, username, pw); (the call that fails inside the exception), and this works well.
Another interesting detail: if I remove the %dep block and the class line, the first block produces a very similar error. Same error message, same failing function and line number, but the stack of functions is a bit different.
I have found the source code here: http://code.metager.de/source/xref/openjdk/jdk8/jdk/src/share/classes/java/sql/DriverManager.java
My problem is in line 689. So if all parameters are OK, maybe it comes from the isDriverAllowed() check?
I've had the same problem with dependencies in Zeppelin, and I had to add my jars to SPARK_SUBMIT_OPTIONS in zeppelin-env.sh to have them included in all notebooks and paragraphs.
So in zeppelin-env.sh you modify SPARK_SUBMIT_OPTIONS to be:
export SPARK_SUBMIT_OPTIONS="--jars /data/extraJarFiles/postgresql-9.4.1208.jar"
Then you have to restart your Zeppelin instance.
In my case, while executing Spark/Scala code, I received the same error. I had previously set SPARK_CLASSPATH in my spark-env.sh conf file - it was pointing to a jar file. I removed/commented out that line in spark-env.sh and restarted Zeppelin. This got rid of the error.

Getting InvalidConfigurationException in JGit while pulling remote branch

I'm trying to pull the remote master branch into my currently checked-out local branch. Here's the code for it:
checkout.setName(branchName).call();
PullCommand pullCommand = git.pull();
System.out.println("Pulling master into " + branchName + "...");
StoredConfig config = git.getRepository().getConfig();
config.setString("branch", "master", "merge", "refs/heads/master");
pullCommand.setRemote("https://github.com/blackblood/TattooShop.git");
pullCommand.setRemoteBranchName("master");
pullResult = pullCommand.setCredentialsProvider(credentialsProvider).call();
When I run the code I get the following error on this line: pullCommand.setRemote("https://github.com/blackblood/TattooShop.git");
Error :
org.eclipse.jgit.api.errors.InvalidConfigurationException:
No value for key remote.https://github.com/blackblood/TattooShop.git.url found in configurationCouldn't pull from remote. Terminating...
at org.eclipse.jgit.api.PullCommand.call(PullCommand.java:247)
at upload_gen.Launcher.updateFromRemote(Launcher.java:179)
at upload_gen.Launcher.main(Launcher.java:62)
Following are the contents of my .git/config file
[core]
    repositoryformatversion = 0
    filemode = false
    bare = false
    logallrefupdates = true
    symlinks = false
    ignorecase = true
    hideDotFiles = dotGitOnly
[remote "origin"]
    url = https://github.com/blackblood/TattooShop.git
    fetch = +refs/heads/*:refs/remotes/origin/*
[branch "master"]
    remote = origin
    merge = refs/heads/master
[remote "heroku"]
    url = git@heroku.com:tattooshop.git
    fetch = +refs/heads/*:refs/remotes/heroku/*
This seems to be a bug in JGit. According to the JavaDoc of setRemote(), it sets the remote (URI or name) to be used for the pull operation, but apparently only the remote name works.
Given your configuration, you can work around the issue by using the remote name like this:
pullCommand.setRemote( "origin" );
I recommend opening a bug report in the JGit Bugzilla so that this gets fixed in future versions of JGit.
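With that change, the pull from the question condenses to something like the following sketch (reusing the variables from the question's code; Groovy syntax, but the same chain works in Java):
checkout.setName(branchName).call()
// use the remote *name* from .git/config ("origin"), not its URL
def pullResult = git.pull()
        .setRemote("origin")
        .setRemoteBranchName("master")
        .setCredentialsProvider(credentialsProvider)
        .call()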
