Expand AD groups using Scala / Java

I am trying to expand a number of AD groups using Scala. Based on the code given at
http://www.thetekblog.com/2010/06/active-directory-with-ldap-retrieving-all-members-of-a-group/
I wrote the following:
package com.abhi

import java.util
import javax.naming._
import javax.naming.directory.{SearchControls, SearchResult}
import javax.naming.ldap._

object LDAPScala extends App {
  val base = "ou=Foo,dc=MYCOMPANY,dc=COM"
  val env = new util.Hashtable[String, String]()
  env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory")
  env.put(Context.SECURITY_AUTHENTICATION, "simple")
  env.put(Context.SECURITY_PRINCIPAL, "foo@mycompany.com")
  env.put(Context.SECURITY_CREDENTIALS, "Bar")
  env.put(Context.PROVIDER_URL, "ldap://ldapserver.mycompany.com:389")
  val groupList = List("Group1", "Group2", "Group3")

  try {
    val ctx = new InitialLdapContext(env, null)
    val searchCtls = new SearchControls()
    searchCtls.setSearchScope(SearchControls.SUBTREE_SCOPE)
    val attributes = Array("member", "memberof")
    searchCtls.setReturningAttributes(attributes)
    for {
      group <- groupList
    } {
      val searchFilter = s"(&(objectCategory=group)(name=${group}))"
      val answers = ctx.search(base, searchFilter, searchCtls)
      while (answers.hasMoreElements) {
        val answer = answers.next()
        val attributes = answer.getAttributes.getAll
        while (attributes.hasMore) {
          val attr = attributes.nextElement()
          val everyone = attr.getAll
          while (everyone.hasMore) {
            val person = everyone.next()
            println(person)
          }
        }
      }
    }
  } catch {
    case e: Exception =>
      println(e.getMessage)
      println(e.getStackTrace)
  }
}
This code works, and I can see a list of users in each group, like this:
CN=User1,OU=Users,OU=Accounts,OU=tor,OU=CA,OU=AMER,OU=Regions,DC=FOO,DC=COM
CN=User2,OU=Users,OU=Accounts,OU=LON,OU=UK,OU=EMEA,OU=Regions,DC=FOO,DC=COM
CN=User3,OU=Users,OU=Accounts,OU=pla,OU=US,OU=AMER,OU=Regions,DC=FOO,DC=COM
Three questions:
1. I need the login IDs (I think these are called sAMAccountNames), but here the CN contains people's actual names, not their login IDs.
2. Will this give me all the members? I remember AD has some kind of limitation where it truncates the list of users in a group if there are too many.
3. I don't know whether the code above will work if there are groups nested within groups.

I was able to convert the CNs to sAMAccountNames. The final code is:
package com.abhi

import java.util
import javax.naming._
import javax.naming.directory.SearchControls
import javax.naming.ldap._
import scala.collection.mutable.ArrayBuffer

object LDAPScala extends App {
  val base = "dc=FOO,dc=COM"
  val env = new util.Hashtable[String, String]()
  env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory")
  env.put(Context.SECURITY_AUTHENTICATION, "simple")
  env.put(Context.SECURITY_PRINCIPAL, "user@foo.com")
  env.put(Context.SECURITY_CREDENTIALS, "pass")
  env.put(Context.PROVIDER_URL, "ldap://adserver.foo.com:389")
  val groupList = List("group1", "group2", "group3")

  try {
    val people = for {
      group <- groupList
      cn    <- queryAD(base, s"(&(objectCategory=group)(name=${group}))", "member")
      sam   <- queryAD(cn, "(sAMAccountName=*)", "samaccountname")
      name  <- getName(cn)
    } yield (group, sam, name)
    people.foreach { case (g, s, n) => println(s"$g,$s,$n") }
  } catch {
    case e: Exception =>
      println(e.getMessage)
      println(e.getStackTrace)
  }

  def getName(cn: String): Option[String] = {
    val regex = """^CN=([\w\s\d]*),.*$""".r
    cn match {
      case regex(name) => Some(name)
      case _ => None
    }
  }

  def queryAD(base: String, searchFilter: String, attribute: String): List[String] = {
    val ctx = new InitialLdapContext(env, null)
    val searchCtls = new SearchControls()
    searchCtls.setSearchScope(SearchControls.SUBTREE_SCOPE)
    searchCtls.setReturningAttributes(Array(attribute))
    val answers = ctx.search(base, searchFilter, searchCtls)
    val retVal = ArrayBuffer[String]()
    while (answers.hasMoreElements) {
      val answer = answers.next()
      val member = answer.getAttributes.get(attribute).getAll
      while (member.hasMoreElements) {
        val person = member.next().toString
        retVal += person
      }
    }
    retVal.toList
  }
}

Related

How to resolve current committed offsets differing from current available offsets?

I am attempting to read Avro data from Kafka using Spark Streaming, but I receive the following error message:
Streaming Query Exception caught!: org.apache.spark.sql.streaming.StreamingQueryException: Job aborted.
=== Streaming Query ===
Identifier: [id = 8b54c92d-6bbc-4dbc-84d0-55b762c21ba2, runId = 4bc92b3c-343e-4886-b0bc-0777b89f9ec8]
Current Committed Offsets: {KafkaV2[Subscribe[customer-avro4]]: {"customer-avro":{"0":17}}}
Current Available Offsets: {KafkaV2[Subscribe[customer-avro4]]: {"customer-avro":{"0":20}}}
Current State: ACTIVE
Thread State: RUNNABLE
Any idea what the issue might be and how to resolve it? The code is below (inspired by the xebia-france spark-structured-streaming blog). It actually ran successfully earlier, but now there is a problem.
import com.databricks.spark.avro.SchemaConverters
import io.confluent.kafka.schemaregistry.client.{CachedSchemaRegistryClient, SchemaRegistryClient}
import io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer
import org.apache.avro.Schema
import org.apache.avro.generic.GenericRecord
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.StreamingQueryException

object AvroConsumer {
  private val topic = "customer-avro4"
  private val kafkaUrl = "http://localhost:9092"
  private val schemaRegistryUrl = "http://localhost:8081"

  private val schemaRegistryClient = new CachedSchemaRegistryClient(schemaRegistryUrl, 128)
  private val kafkaAvroDeserializer = new AvroDeserializer(schemaRegistryClient)

  private val avroSchema = schemaRegistryClient.getLatestSchemaMetadata(topic + "-value").getSchema
  private val sparkSchema = SchemaConverters.toSqlType(new Schema.Parser().parse(avroSchema))

  def main(args: Array[String]): Unit = {
    val spark = SparkSession
      .builder
      .appName("ConfluentConsumer")
      .master("local[*]")
      .getOrCreate()

    spark.sparkContext.setLogLevel("ERROR")

    spark.udf.register("deserialize", (bytes: Array[Byte]) =>
      DeserializerWrapper.deserializer.deserialize(bytes)
    )

    val kafkaDataFrame = spark
      .readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", kafkaUrl)
      .option("subscribe", topic)
      .load()

    val valueDataFrame = kafkaDataFrame.selectExpr("""deserialize(value) AS message""")

    import org.apache.spark.sql.functions._

    val formattedDataFrame = valueDataFrame.select(
      from_json(col("message"), sparkSchema.dataType).alias("parsed_value"))
      .select("parsed_value.*")

    val writer = formattedDataFrame
      .writeStream
      .format("parquet")
      .option("checkpointLocation", "hdfs://localhost:9000/data/spark/parquet/checkpoint")

    while (true) {
      val query = writer.start("hdfs://localhost:9000/data/spark/parquet/total")
      try {
        query.awaitTermination()
      }
      catch {
        case e: StreamingQueryException => println("Streaming Query Exception caught!: " + e)
      }
    }
  }

  object DeserializerWrapper {
    val deserializer: AvroDeserializer = kafkaAvroDeserializer
  }

  class AvroDeserializer extends AbstractKafkaAvroDeserializer {
    def this(client: SchemaRegistryClient) {
      this()
      this.schemaRegistry = client
    }

    override def deserialize(bytes: Array[Byte]): String = {
      val genericRecord = super.deserialize(bytes).asInstanceOf[GenericRecord]
      genericRecord.toString
    }
  }
}
Figured it out: the problem was not with the Spark-Kafka integration itself, as I had first thought, but with the checkpoint information inside the HDFS filesystem. Deleting and recreating the checkpoint folder in HDFS solved it for me.
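For reference, a minimal sketch of clearing that checkpoint folder programmatically with the Hadoop FileSystem API (the path matches the checkpointLocation used above; only do this when you are sure the stream can be restarted from scratch, since all stored offsets are lost):

import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object ResetCheckpoint extends App {
  // Checkpoint location used by the streaming query above
  val checkpointDir = "hdfs://localhost:9000/data/spark/parquet/checkpoint"

  val fs = FileSystem.get(new URI(checkpointDir), new Configuration())
  val path = new Path(checkpointDir)

  // Recursively delete the stale checkpoint state, then recreate the empty folder
  if (fs.exists(path)) fs.delete(path, true)
  fs.mkdirs(path)
}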

Dynamic compilation of multiple Scala classes at runtime

I know I can compile individual "snippets" in Scala using the Toolbox like this:
import scala.reflect.runtime.universe
import scala.tools.reflect.ToolBox

object Compiler {
  val tb = universe.runtimeMirror(getClass.getClassLoader).mkToolBox()

  def main(args: Array[String]): Unit = {
    tb.eval(tb.parse("""println("hello!")"""))
  }
}
Is there any way I can compile more than just "snippets", i.e., classes that refer to each other? Like this:
import scala.reflect.runtime.universe
import scala.tools.reflect.ToolBox

object Compiler {
  private val tb = universe.runtimeMirror(getClass.getClassLoader).mkToolBox()

  val a: String =
    """
      |package pkg {
      |
      |class A {
      |  def compute(): Int = 42
      |}}
    """.stripMargin

  val b: String =
    """
      |import pkg._
      |
      |class B {
      |  def fun(): Unit = {
      |    new A().compute()
      |  }
      |}
    """.stripMargin

  def main(args: Array[String]): Unit = {
    val compiledA = tb.parse(a)
    val compiledB = tb.parse(b)
    tb.eval(compiledB)
  }
}
Obviously, my snippet doesn't work, because I have to tell the toolbox how to resolve "A" somehow:
Exception in thread "main" scala.tools.reflect.ToolBoxError: reflective compilation has failed:
not found: type A
Try
import scala.reflect.runtime.universe
import scala.reflect.runtime.universe._
import scala.tools.reflect.ToolBox

val tb = universe.runtimeMirror(getClass.getClassLoader).mkToolBox()

val a = q"""
  class A {
    def compute(): Int = 42
  }"""

val symbA = tb.define(a)

val b = q"""
  class B {
    def fun(): Unit = {
      new $symbA().compute()
    }
  }"""

tb.eval(b)
https://github.com/scala/scala/blob/2.13.x/src/compiler/scala/tools/reflect/ToolBox.scala#L131-L138
In cases more complex than the toolbox can handle, you can always run the compiler manually:
import scala.reflect.internal.util.{AbstractFileClassLoader, BatchSourceFile}
import scala.reflect.io.{AbstractFile, VirtualDirectory}
import scala.reflect.runtime
import scala.reflect.runtime.universe
import scala.reflect.runtime.universe._
import scala.tools.nsc.{Global, Settings}

val a: String =
  """
    |package pkg {
    |
    |class A {
    |  def compute(): Int = 42
    |}}
  """.stripMargin

val b: String =
  """
    |import pkg._
    |
    |class B {
    |  def fun(): Unit = {
    |    println(new A().compute())
    |  }
    |}
  """.stripMargin

val directory = new VirtualDirectory("(memory)", None)
compileCode(List(a, b), List(), directory)
val runtimeMirror = createRuntimeMirror(directory, runtime.currentMirror)
val bInstance = instantiateClass("B", runtimeMirror)
runClassMethod("B", runtimeMirror, "fun", bInstance) // 42

def compileCode(sources: List[String], classpathDirectories: List[AbstractFile], outputDirectory: AbstractFile): Unit = {
  val settings = new Settings
  classpathDirectories.foreach(dir => settings.classpath.prepend(dir.toString))
  settings.outputDirs.setSingleOutput(outputDirectory)
  settings.usejavacp.value = true
  val global = new Global(settings)
  val files = sources.zipWithIndex.map { case (code, i) => new BatchSourceFile(s"(inline-$i)", code) }
  (new global.Run).compileSources(files)
}

def instantiateClass(className: String, runtimeMirror: Mirror, arguments: Any*): Any = {
  val classSymbol = runtimeMirror.staticClass(className)
  val classType = classSymbol.typeSignature
  val constructorSymbol = classType.decl(termNames.CONSTRUCTOR).asMethod
  val classMirror = runtimeMirror.reflectClass(classSymbol)
  val constructorMirror = classMirror.reflectConstructor(constructorSymbol)
  constructorMirror(arguments: _*)
}

def runClassMethod(className: String, runtimeMirror: Mirror, methodName: String, classInstance: Any, arguments: Any*): Any = {
  val classSymbol = runtimeMirror.staticClass(className)
  val classType = classSymbol.typeSignature
  val methodSymbol = classType.decl(TermName(methodName)).asMethod
  val instanceMirror = runtimeMirror.reflect(classInstance)
  val methodMirror = instanceMirror.reflectMethod(methodSymbol)
  methodMirror(arguments: _*)
}

//def runObjectMethod(objectName: String, runtimeMirror: Mirror, methodName: String, arguments: Any*): Any = {
//  val objectSymbol = runtimeMirror.staticModule(objectName)
//  val objectModuleMirror = runtimeMirror.reflectModule(objectSymbol)
//  val objectInstance = objectModuleMirror.instance
//  val objectType = objectSymbol.typeSignature
//  val methodSymbol = objectType.decl(TermName(methodName)).asMethod
//  val objectInstanceMirror = runtimeMirror.reflect(objectInstance)
//  val methodMirror = objectInstanceMirror.reflectMethod(methodSymbol)
//  methodMirror(arguments: _*)
//}

def createRuntimeMirror(directory: AbstractFile, parentMirror: Mirror): Mirror = {
  val classLoader = new AbstractFileClassLoader(directory, parentMirror.classLoader)
  universe.runtimeMirror(classLoader)
}

hadoop distributed copy overwrite not working

I am trying to use the org.apache.hadoop.tools.DistCp class to copy some files into an S3 bucket. However, the overwrite functionality is not working, in spite of explicitly setting the overwrite flag to true.
Copying works fine, but existing files are not overwritten; the copy mapper skips them. I have explicitly set the "overwrite" option to true.
import com.typesafe.scalalogging.LazyLogging
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.hadoop.tools.{DistCp, DistCpOptions}
import org.apache.hadoop.util.ToolRunner
import scala.collection.JavaConverters._

object distcptest extends App with LazyLogging {
  def copytoS3(hdfsSrcFilePathStr: String, s3DestPathStr: String) = {
    val hdfsSrcPathList = List(new Path(hdfsSrcFilePathStr))
    val s3DestPath = new Path(s3DestPathStr)

    val distcpOpt = new DistCpOptions(hdfsSrcPathList.asJava, s3DestPath)
    // Overwriting is not working in spite of explicitly setting it to true.
    distcpOpt.setOverwrite(true)

    val conf: Configuration = new Configuration()
    conf.set("fs.s3n.awsSecretAccessKey", "secret key")
    conf.set("fs.s3n.awsAccessKeyId", "access key")
    conf.set("fs.s3n.impl", "org.apache.hadoop.fs.s3native.NativeS3FileSystem")

    val distCp: DistCp = new DistCp(conf, distcpOpt)
    val filepaths: Array[String] = Array(hdfsSrcFilePathStr, s3DestPathStr)

    try {
      val distCp_result = ToolRunner.run(distCp, filepaths)
      if (distCp_result != 0) {
        logger.error(s"DistCP has failed with - error code = $distCp_result")
      }
    }
    catch {
      case e: Exception => {
        e.printStackTrace()
      }
    }
  }

  copytoS3("hdfs://abc/pqr", "s3n://xyz/wst")
}
I think the problem is that you called ToolRunner.run(distCp, filepaths).
If you check the source code of DistCp, its run method overwrites inputOptions by re-parsing the command-line arguments, so the DistCpOptions you passed to the constructor has no effect (see the workaround sketch after the excerpt):
@Override
public int run(String[] argv) {
  ...
  try {
    inputOptions = (OptionsParser.parse(argv));
    ...
  } catch (Throwable e) {
    ...
  }
  ...
}
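One possible workaround, sketched below under the assumption that the standard DistCp command-line switches apply: pass -overwrite in the argument array that ToolRunner.run re-parses, instead of relying on the DistCpOptions object. The paths are the hypothetical ones from the question.

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.hadoop.tools.{DistCp, DistCpOptions}
import org.apache.hadoop.util.ToolRunner
import scala.collection.JavaConverters._

object DistCpOverwriteSketch extends App {
  // Hypothetical source and destination, mirroring the question
  val src = "hdfs://abc/pqr"
  val dst = "s3n://xyz/wst"

  val conf = new Configuration()
  // The fs.s3n.* credential settings from the question would go here as well.

  val opts = new DistCpOptions(List(new Path(src)).asJava, new Path(dst))

  // The key change: run() rebuilds its options from argv via OptionsParser.parse,
  // so the -overwrite switch has to be part of the argument array itself.
  val args = Array("-overwrite", src, dst)
  val exitCode = ToolRunner.run(new DistCp(conf, opts), args)
  println(s"DistCp exit code: $exitCode")
}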

how to use try-resources in kotlin?

I am trying to use Kotlin instead of Java, and I cannot find a good way to handle try-with-resources. The Java code looks like this:
import org.tensorflow.Graph;
import org.tensorflow.Session;
import org.tensorflow.Tensor;
import org.tensorflow.TensorFlow;

public class HelloTensorFlow {
  public static void main(String[] args) throws Exception {
    try (Graph g = new Graph()) {
      final String value = "Hello from " + TensorFlow.version();

      // Construct the computation graph with a single operation, a constant
      // named "MyConst" with a value "value".
      try (Tensor t = Tensor.create(value.getBytes("UTF-8"))) {
        // The Java API doesn't yet include convenience functions for adding operations.
        g.opBuilder("Const", "MyConst").setAttr("dtype", t.dataType()).setAttr("value", t).build();
      }

      // Execute the "MyConst" operation in a Session.
      try (Session s = new Session(g);
           // Generally, there may be multiple output tensors,
           // all of them must be closed to prevent resource leaks.
           Tensor output = s.runner().fetch("MyConst").run().get(0)) {
        System.out.println(new String(output.bytesValue(), "UTF-8"));
      }
    }
  }
}
When I do it in Kotlin, I have to write this:
fun main(args: Array<String>) {
  val g = Graph()
  try {
    val value = "Hello from ${TensorFlow.version()}"
    val t = Tensor.create(value.toByteArray(Charsets.UTF_8))
    try {
      g.opBuilder("Const", "MyConst").setAttr("dtype", t.dataType()).setAttr("value", t).build()
    } finally {
      t.close()
    }

    val sess = Session(g)
    try {
      val output = sess.runner().fetch("MyConst").run().get(0)
      println(String(output.bytesValue(), Charsets.UTF_8))
    } finally {
      sess.close()
    }
  } finally {
    g.close()
  }
}
I have tried to use Kotlin's use function like this:
Graph().use {
it -> ....
}
I got an error like this:
Error:(16, 20) Kotlin: Unresolved reference. None of the following candidates is applicable because of receiver type mismatch:
@InlineOnly public inline fun ???.use(block: (???) -> ???): ??? defined in kotlin.io
I was just using the wrong dependency:
compile "org.jetbrains.kotlin:kotlin-stdlib"
Replace it with:
compile "org.jetbrains.kotlin:kotlin-stdlib-jdk8"

Jenkins groovy init script for sonarqube configuration

I am trying to configure SonarQube settings in Jenkins using a Groovy init script, but I am getting the error below. Can somebody help me resolve this?
Error
+++++
groovy.lang.GroovyRuntimeException: Could not find matching constructor for:
hudson.plugins.sonar.SonarInstallation(java.lang.String, java.lang.String,
java.lang.String, hudson.plugins.sonar.model.TriggersConfig,
java.lang.String)
at groovy.lang.MetaClassImpl.invokeConstructor(MetaClassImpl.java:1732)
at groovy.lang.MetaClassImpl.invokeConstructor(MetaClassImpl.java:1532)
This is the script I am using:
import hudson.model.*
import jenkins.model.*
import hudson.plugins.sonar.SonarGlobalConfiguration
import hudson.plugins.sonar.*
import hudson.plugins.sonar.model.TriggersConfig
import hudson.tools.*

def inst = Jenkins.getInstance()

println "--> Configuring SonarQube"

SonarGlobalConfiguration sonar_conf = Hudson.instance.getDescriptorByType(SonarGlobalConfiguration.class)

def sonar_inst = new SonarInstallation(
    "SonarQ",
    "http://localhost:9000",
    "yy", // Token
    new TriggersConfig(),
    ""
)

// Only add ADOP Sonar if it does not exist - do not overwrite existing config
def sonar_installations = sonar_conf.getInstallations()
def sonar_inst_exists = false
sonar_installations.each {
    installation = (SonarInstallation) it
    if (sonar_inst.getName() == installation.getName()) {
        sonar_inst_exists = true
        println("Found existing installation: " + installation.getName())
    }
}

if (!sonar_inst_exists) {
    sonar_installations += sonar_inst
    sonar_conf.setInstallations((SonarInstallation[]) sonar_installations)
    sonar_conf.save()
}
You missed some parameters: the SonarInstallation constructor takes 7 arguments, not 5 (see the corrected call after the excerpt):
@DataBoundConstructor
public SonarInstallation(String name,
    String serverUrl, String serverAuthenticationToken,
    String mojoVersion, String additionalProperties, TriggersConfig triggers,
    String additionalAnalysisProperties) {
  this.name = name;
  this.serverUrl = serverUrl;
  this.serverAuthenticationToken = serverAuthenticationToken;
  this.additionalAnalysisProperties = additionalAnalysisProperties;
  this.mojoVersion = mojoVersion;
  this.additionalProperties = additionalProperties;
  this.triggers = triggers;
}
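For example, the construction in the init script could be padded out to match that signature. A sketch only: the empty strings are placeholders for mojoVersion, additionalProperties, and additionalAnalysisProperties.

def sonar_inst = new SonarInstallation(
    "SonarQ",                 // name
    "http://localhost:9000",  // serverUrl
    "yy",                     // serverAuthenticationToken
    "",                       // mojoVersion
    "",                       // additionalProperties
    new TriggersConfig(),     // triggers
    ""                        // additionalAnalysisProperties
)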
