I'm new to Infinispan and I am currently trying to use the search functionality. I have tried to follow the documentation closely. First I updated my cache.xml:
<namedCache name="cache">
    <transaction transactionMode="NON_TRANSACTIONAL"/>
    <indexing enabled="true" indexLocalOnly="true"/>
</namedCache>
I am trying to write a query that will give me a list of the entries whose attribute matches search_value. Here is the Java code that I have:
SearchManager searchManager = org.infinispan.query.Search.getSearchManager(cache);
Term t = new Term("attribute_name", search_value);
Query q = new TermQuery(t);
CacheQuery cacheQuery = searchManager.getQuery(q);
List<Object> found = cacheQuery.list();
However, when I try to run the tests, I am getting this error:
org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock#.\Key\write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:84)
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1098)
at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.createNewIndexWriter(IndexWriterHolder.java:148)
at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.getIndexWriter(IndexWriterHolder.java:115)
at org.hibernate.search.backend.impl.lucene.AbstractWorkspaceImpl.getIndexWriter(AbstractWorkspaceImpl.java:117)
at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.applyUpdates(LuceneBackendQueueTask.java:101)
at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.run(LuceneBackendQueueTask.java:67)
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
13/08/02 14:08:25 ERROR lucene.LuceneBackendQueueTask: HSEARCH000072: Couldn't open the IndexWriter because of previous error: operation skipped, index ouf of sync!
What is causing the write-lock error? I have even tried removing the Java query code, leaving just the indexing configuration, and the same error occurs. Am I not configuring the cache correctly? Any help is greatly appreciated! Thank you!
The underlying search engine, based on Hibernate Search and Apache Lucene, uses locks to request exclusive write access to the index.
Assuming you don't have a second application attempting to write on the same index directory, you likely have a left-over lock file in the directory from a killed JVM or hardware crash.
Look for a directory on the filesystem with the same name as the index (or indexes) you're using, and remove the write.lock marker file (if you're sure that no process is writing to it).
You could also configure a different locking strategy, but make sure you won't shoot yourself in the foot:
LockFactory configuration
If this is not a left-over lock from a previously killed JVM, then it could be that you're running multiple Infinispan instances. Make sure they store the indexes in separate base directories (the default is the current path you're starting the process from).
Directory configuration
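For example, here is a sketch of an indexing configuration that pins the indexes to an explicit base directory and selects a locking strategy (the property names follow the Hibernate Search conventions that Infinispan delegates to; the path and values are only illustrative, so check them against your Infinispan version):
<indexing enabled="true" indexLocalOnly="true">
    <properties>
        <property name="default.directory_provider" value="filesystem"/>
        <property name="default.indexBase" value="/var/data/infinispan-indexes"/>
        <property name="default.locking_strategy" value="native"/>
    </properties>
</indexing>
Running multiple instances then only requires giving each one a distinct indexBase.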
I'm trying to call the static deserialize method of the hsqldb (2.5.1) InOutUtil class. When I run it with java -cp hsqldb.jar:. testcode, I get:
java.sql.SQLSyntaxErrorException: user lacks privilege or object not found: org.hsqldb.lib.InOutUtil.deserialize
at org.hsqldb.jdbc.JDBCUtil.sqlException(Unknown Source)
at org.hsqldb.jdbc.JDBCUtil.sqlException(Unknown Source)
at org.hsqldb.jdbc.JDBCStatement.fetchResult(Unknown Source)
at org.hsqldb.jdbc.JDBCStatement.execute(Unknown Source)
at testcode.main(testcode.java:58)
Caused by: org.hsqldb.HsqlException: user lacks privilege or object not found: org.hsqldb.lib.InOutUtil.deserialize
at org.hsqldb.error.Error.error(Unknown Source)
at org.hsqldb.result.Result.getException(Unknown Source)
... 4 more
Code:
...
connection = DriverManager.getConnection(dburl, "sa", "");
statement = connection.createStatement();
statement.execute("call \"java.lang.System.setProperty\"('org.apache.commons.collections.enableUnsafeSerialization','true')");
statement.execute("call \"org.hsqldb.lib.InOutUtil.deserialize\"('" + my_object +"');");
...
This is the offending line that throws the exception:
statement.execute("call \"org.hsqldb.lib.InOutUtil.deserialize\"('" + my_object +"');");
What I'm trying to do is reproduce this exploit, https://github.com/Critical-Start/Team-Ares/tree/master/CVE-2020-5902, on a local instance of hsqldb.
Not sure what I'm doing wrong. Thanks!
The exploit you linked to refers to HSQLDB version 1.8.0, which has been obsolete since the release of version 2.0 in 2010. However, aspects of the security framework remain the same up to the latest version of HyperSQL.
A database user, even with DBA credentials, cannot execute an arbitrary static method that happens to be on the classpath of the database server. The sysadmin who starts the database server can issue an allow-list of the specific static methods that are allowed to run as callable procedures, using the hsqldb.method_class_names Java system property. See: http://hsqldb.org/doc/2.0/guide/sqlroutines-chapt.html#src_jrt_access_control
The listed safe static methods can then be turned into SQL callable procedures, but only by a user with DBA credentials. EXECUTE privileges on the procedures are then granted by the DBA.
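As a sketch of that workflow (the allowed class, database paths, and function name here are only illustrative; check the linked guide for the exact property syntax in your version):
# start the server with an explicit allow-list of callable static methods
java -Dhsqldb.method_class_names="java.lang.Math.*" \
    -cp hsqldb.jar org.hsqldb.server.Server --database.0 file:testdb --dbname.0 test

-- then, with DBA credentials, expose an allowed method as a SQL function and grant access
CREATE FUNCTION sqrt_fn(x DOUBLE) RETURNS DOUBLE
    NO SQL LANGUAGE JAVA EXTERNAL NAME 'CLASSPATH:java.lang.Math.sqrt';
GRANT EXECUTE ON FUNCTION sqrt_fn TO PUBLIC;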
Versions 2.x of HyperSQL generally improve upon the older security framework, for example by allowing secure password hash algorithms, password check and retention policies, and external authentication via LDAP and other frameworks.
You may succeed if you "try harder" and find the way HSQLDB expects parameter type definitions.
With HSQLDB 2.7.1 the maintainer's answer is borne out: the restriction is enforced by the absence or presence of the server's system property (in addition to the connecting user's privilege to execute procedures). The server requires an explicit permission in its own Java properties, such as -Dhsqldb.method_class_names="org.hsqldb.lib.StringConverter.*". (Using just the class name did not enable its static methods for me.)
$ java -jar ~/hsqldb-svn-trunk/lib/sqltool.jar --inlineRc="url=jdbc:hsqldb:hsql://localhost/test,user=sa,password=" --sql="CALL \"org.hsqldb.lib.StringConverter.getUTFSize\"('java.class.path');"
15
With older HSQLDBs (or with a dangerously wide class-name permission in 2.7.1), the only hurdle is to correctly declare the Java method's parameter types. I have already figured out that byte corresponds to TINYINT; I just need to learn the array mapping.
$ nl=$'\n'; java -cp ~/hsqldb-svn-trunk/lib/sqltool.jar org.hsqldb.cmdline.SqlTool --inlineRc="url=jdbc:hsqldb:hsql://localhost/test,user=sa,password=" --sql="CREATE FUNCTION TEST (BINARY) RETURNS BINARY NO SQL LANGUAGE JAVA EXTERNAL NAME 'CLASSPATH:org.hsqldb.lib.InOutUtil.deserialize';${nl}.;"
Feb 14, 2023 2:29:58 PM org.hsqldb.cmdline.SqlTool objectMain
SEVERE: SQL Error at '--sql' line 2:
"CREATE FUNCTION TEST (BINARY) RETURNS BINARY NO SQL LANGUAGE JAVA EXTERNAL NAME 'CLASSPATH:org.hsqldb.lib.InOutUtil.deserialize';"
user lacks privilege or object not found: org.hsqldb.lib.InOutUtil
org.hsqldb.cmdline.SqlTool$SqlToolException
I have a Spark job that processes several folders on S3 per run and stores its state in DynamoDB. In other words, we run the job once per day; it looks for new folders added by another job, transforms them one by one, and writes state to DynamoDB. Here's rough pseudocode:
object App {
  val allFolders = S3Folders.list()
  val foldersToProcess = DynamoDBState.getFoldersToProcess(allFolders)
  Transformer.run(foldersToProcess)
}

object Transformer {
  def run(folders: List[String]): Unit = {
    val sc = new SparkContext()
    folders.foreach(process(sc, _))
  }

  def process(sc: SparkContext, folder: String): Unit = ??? // transform and write to S3
}
This approach works well if S3Folders.list() returns a relatively small number of folders (up to a few thousand). If it returns more (4-8K), we very often see the following error (which at first glance has nothing to do with Spark):
17/10/31 08:38:20 ERROR ApplicationMaster: User class threw exception: shadeaws.SdkClientException: Failed to sanitize XML document destined for handler class shadeaws.services.s3.model.transform.XmlResponsesSaxParser$ListObjectsV2Handler
shadeaws.SdkClientException: Failed to sanitize XML document destined for handler class shadeaws.services.s3.model.transform.XmlResponsesSaxParser$ListObjectsV2Handler
at shadeaws.services.s3.model.transform.XmlResponsesSaxParser.sanitizeXmlDocument(XmlResponsesSaxParser.java:214)
at shadeaws.services.s3.model.transform.XmlResponsesSaxParser.parseListObjectsV2Response(XmlResponsesSaxParser.java:315)
at shadeaws.services.s3.model.transform.Unmarshallers$ListObjectsV2Unmarshaller.unmarshall(Unmarshallers.java:88)
at shadeaws.services.s3.model.transform.Unmarshallers$ListObjectsV2Unmarshaller.unmarshall(Unmarshallers.java:77)
at shadeaws.services.s3.internal.S3XmlResponseHandler.handle(S3XmlResponseHandler.java:62)
at shadeaws.services.s3.internal.S3XmlResponseHandler.handle(S3XmlResponseHandler.java:31)
at shadeaws.http.response.AwsResponseHandlerAdapter.handle(AwsResponseHandlerAdapter.java:70)
at shadeaws.http.AmazonHttpClient$RequestExecutor.handleResponse(AmazonHttpClient.java:1553)
at shadeaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1271)
at shadeaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1055)
at shadeaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
at shadeaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
at shadeaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
at shadeaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
at shadeaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
at shadeaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
at shadeaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4247)
at shadeaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4194)
at shadeaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4188)
at shadeaws.services.s3.AmazonS3Client.listObjectsV2(AmazonS3Client.java:865)
at me.chuwy.transform.S3Folders$.com$chuwy$transform$S3Folders$$isGlacierified(S3Folders.scala:136)
at scala.collection.TraversableLike$$anonfun$filterImpl$1.apply(TraversableLike.scala:248)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.TraversableLike$class.filterImpl(TraversableLike.scala:247)
at scala.collection.TraversableLike$class.filterNot(TraversableLike.scala:267)
at scala.collection.AbstractTraversable.filterNot(Traversable.scala:104)
at me.chuwy.transform.S3Folders$.list(S3Folders.scala:112)
at me.chuwy.transform.Main$.main(Main.scala:22)
at me.chuwy.transform.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:637)
Caused by: shadeaws.AbortedException:
at shadeaws.internal.SdkFilterInputStream.abortIfNeeded(SdkFilterInputStream.java:53)
at shadeaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:81)
at shadeaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
at java.io.InputStreamReader.read(InputStreamReader.java:184)
at java.io.BufferedReader.read1(BufferedReader.java:210)
at java.io.BufferedReader.read(BufferedReader.java:286)
at java.io.Reader.read(Reader.java:140)
at shadeaws.services.s3.model.transform.XmlResponsesSaxParser.sanitizeXmlDocument(XmlResponsesSaxParser.java:186)
... 36 more
For a large number of folders (~20K) this happens every time and the job cannot start.
Previously we had a very similar, but much more frequent, error when getFoldersToProcess did a GetItem for every folder from allFolders and therefore took much longer:
17/09/30 14:46:07 ERROR ApplicationMaster: User class threw exception: shadeaws.AbortedException:
shadeaws.AbortedException:
at shadeaws.internal.SdkFilterInputStream.abortIfNeeded(SdkFilterInputStream.java:51)
at shadeaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:71)
at shadeaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
at com.fasterxml.jackson.core.json.ByteSourceJsonBootstrapper.ensureLoaded(ByteSourceJsonBootstrapper.java:489)
at com.fasterxml.jackson.core.json.ByteSourceJsonBootstrapper.detectEncoding(ByteSourceJsonBootstrapper.java:126)
at com.fasterxml.jackson.core.json.ByteSourceJsonBootstrapper.constructParser(ByteSourceJsonBootstrapper.java:215)
at com.fasterxml.jackson.core.JsonFactory._createParser(JsonFactory.java:1240)
at com.fasterxml.jackson.core.JsonFactory.createParser(JsonFactory.java:802)
at shadeaws.http.JsonResponseHandler.handle(JsonResponseHandler.java:109)
at shadeaws.http.JsonResponseHandler.handle(JsonResponseHandler.java:43)
at shadeaws.http.response.AwsResponseHandlerAdapter.handle(AwsResponseHandlerAdapter.java:70)
at shadeaws.http.AmazonHttpClient$RequestExecutor.handleResponse(AmazonHttpClient.java:1503)
at shadeaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1226)
at shadeaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1030)
at shadeaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:742)
at shadeaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:716)
at shadeaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
at shadeaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
at shadeaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
at shadeaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
at shadeaws.services.dynamodbv2.AmazonDynamoDBClient.doInvoke(AmazonDynamoDBClient.java:2089)
at shadeaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:2065)
at shadeaws.services.dynamodbv2.AmazonDynamoDBClient.executeGetItem(AmazonDynamoDBClient.java:1173)
at shadeaws.services.dynamodbv2.AmazonDynamoDBClient.getItem(AmazonDynamoDBClient.java:1149)
at me.chuwy.tranform.sdk.Manifest$.contains(Manifest.scala:179)
at me.chuwy.tranform.DynamoDBState$$anonfun$getUnprocessed$1.apply(ProcessManifest.scala:44)
at scala.collection.TraversableLike$$anonfun$filterImpl$1.apply(TraversableLike.scala:248)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.TraversableLike$class.filterImpl(TraversableLike.scala:247)
at scala.collection.TraversableLike$class.filterNot(TraversableLike.scala:267)
at scala.collection.AbstractTraversable.filterNot(Traversable.scala:104)
at me.chuwy.transform.DynamoDBState$.getFoldersToProcess(DynamoDBState.scala:44)
at me.chuwy.transform.Main$.main(Main.scala:19)
at me.chuwy.transform.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:637)
I believe that the current error has nothing to do with XML parsing or an invalid response, but originates from some race condition inside Spark, because:
There's a clear connection between the amount of time "state-fetching" takes and the chance of failure.
The tracebacks have an underlying AbortedException, which AFAIK is caused by a swallowed InterruptedException, which can mean that something inside the JVM (spark-submit or even YARN) calls Thread.sleep for the main thread.
Right now I'm using EMR AMI 5.5.0, Spark 2.1.0 and shaded AWS SDK 1.11.208, but had similar error with AWS SDK 1.10.75.
I'm deploying this job on EMR via command-runner.jar spark-submit --deploy-mode cluster --class ....
Does anyone have any idea where this exception originates from and how to fix it?
foreach does not guarantee orderly computation; it applies the operation(s) to each element of an RDD, meaning that it will instantiate for every element, which in turn may overwhelm the executor.
The problem was that getFoldersToProcess is a blocking (and very long) operation, which prevents the SparkContext from being instantiated. The SparkContext itself should signal its own instantiation to YARN, and if that doesn't happen within a certain amount of time, YARN assumes the driver node has fallen off and kills the whole cluster.
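So the fix is to create the SparkContext before the long state-fetching call, so that the driver registers with YARN right away. A minimal sketch against the pseudocode from the question (run now receives the already-created context):
object App {
  val sc = new SparkContext() // register with YARN before any long-running work
  val allFolders = S3Folders.list()
  val foldersToProcess = DynamoDBState.getFoldersToProcess(allFolders)
  Transformer.run(sc, foldersToProcess)
}

object Transformer {
  def run(sc: SparkContext, folders: List[String]): Unit =
    folders.foreach(process(sc, _))

  def process(sc: SparkContext, folder: String): Unit = ??? // transform and write to S3
}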
Good afternoon,
When I attempt to use the SMTP Appender, I get an odd error in the console. The error appears to occur when the XML file is loaded, as it isn't logged via any output stream except stdout. The contents of the appender are as follows (and I have confirmed that the error is in this block of XML). I've removed our server information.
<SMTP name="Mailer">
    <Subject>[ERROR] (software name) on ${hostName} has thrown a fatal error</Subject>
    <To>(a valid email in the form of a#a.com)</To>
    <From>(a valid email in the form of a#a.com)</From>
    <SMTPHost>(The internal IP address of a server, in the form of 192.168.0.1)</SMTPHost>
    <SMTPPort>587</SMTPPort>
    <BufferSize>512</BufferSize>
</SMTP>
Whether or not it is referenced in a logger, I receive the following error immediately upon running the program and calling getLogger on the LogManager. I've removed a couple of the file names and replaced them with a rough description of what the file is doing at that point.
2015-05-19 19:08:18,812 ERROR Unable to invoke factory method in class class org.apache.logging.log4j.core.appender.SmtpAppender for element SMTP. java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.logging.log4j.core.config.plugins.util.PluginBuilder.build(PluginBuilder.java:137)
at org.apache.logging.log4j.core.config.AbstractConfiguration.createPluginObject(AbstractConfiguration.java:766)
at org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:706)
at org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:698)
at org.apache.logging.log4j.core.config.AbstractConfiguration.doConfigure(AbstractConfiguration.java:358)
at org.apache.logging.log4j.core.config.AbstractConfiguration.start(AbstractConfiguration.java:161)
at org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:361)
at org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:426)
at org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:442)
at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:138)
at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:147)
at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:41)
at org.apache.logging.log4j.LogManager.getContext(LogManager.java:175)
at org.apache.logging.log4j.LogManager.getLogger(LogManager.java:426)
at (Our Log Interface Class, which essentially just returns the log passed by the Apache log manager)
at (The global variable definition file, the first method to use getLogger)
at (The main method for our software, where the globals are loaded)
Caused by: java.lang.NoSuchMethodError: javax.mail.Session.setProtocolForAddress(Ljava/lang/String;Ljava/lang/String;)V
at org.apache.logging.log4j.core.net.SmtpManager$SMTPManagerFactory.createManager(SmtpManager.java:325)
at org.apache.logging.log4j.core.net.SmtpManager$SMTPManagerFactory.createManager(SmtpManager.java:299)
at org.apache.logging.log4j.core.appender.AbstractManager.getManager(AbstractManager.java:71)
at org.apache.logging.log4j.core.net.SmtpManager.getSMTPManager(SmtpManager.java:124)
at org.apache.logging.log4j.core.appender.SmtpAppender.createAppender(SmtpAppender.java:142)
... 21 more
2015-05-19 19:08:18,814 ERROR Null object returned for SMTP in Appenders.
2015-05-19 19:08:18,819 ERROR Unable to locate appender Mailer for logger fatalerror
The details of the configuration are correct (I know it's a valid IP, etc.) - they worked in log4j 1. The error log tells me virtually nothing about the error, so I am hoping someone has heard of this before. Thanks everyone!
This is the important line from your stacktrace:
Caused by: java.lang.NoSuchMethodError: javax.mail.Session.setProtocolForAddress(Ljava/lang/String;Ljava/lang/String;)V
The setProtocolForAddress method has been part of JavaMail since version 1.4, so this suggests to me that you are using an old version of that JAR. If you are, try upgrading to a later version.
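If you manage dependencies with Maven, for example, pulling in a newer JavaMail would look something like this (the version below is just an example of a post-1.4 release):
<dependency>
    <groupId>com.sun.mail</groupId>
    <artifactId>javax.mail</artifactId>
    <version>1.5.2</version>
</dependency>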
Note that log4j2 has many lookups, not just system properties. So you need to convert ${hostName} to ${sys:hostName}, and if you are using a lookup for the IP address, this may be the cause of the issue. (Try hardcoding all lookup values to exclude this possibility.)
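Applied to the configuration above, the Subject element would then read:
<Subject>[ERROR] (software name) on ${sys:hostName} has thrown a fatal error</Subject>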
I have an error that I sometimes encounter. The error stack trace is below:
2011-05-04-xWorkerPool-1-thread-2--FATAL-su.games.engine.communication.gameSocketDataHandler:ServiceSocketDataHandler.onData: could not write. channel is close or not initialized (id=25c1031512fb560155a71db6548S1517c (closed))-----
org.xsocket.connection.ExtendedClosedChannelException: could not write. channel is close or not initialized (id=25c1031512fb560155a71db6548S1517c (closed))
at org.xsocket.connection.AbstractNonBlockingStream.ensureStreamIsOpenAndWritable(AbstractNonBlockingStream.java:1537)
at org.xsocket.connection.AbstractNonBlockingStream.write(AbstractNonBlockingStream.java:1054)
at org.xsocket.connection.AbstractNonBlockingStream.write(AbstractNonBlockingStream.java:1039)
at su.games.engine.communication.ServiceSocketDataHandler.onData(ServiceSocketDataHandler.java:63)
at org.xsocket.connection.HandlerAdapter.performOnData(HandlerAdapter.java:242)
at org.xsocket.connection.HandlerAdapter.access$200(HandlerAdapter.java:42)
at org.xsocket.connection.HandlerAdapter$PerformOnDataTask.run(HandlerAdapter.java:210)
at org.xsocket.SerializedTaskQueue.performPendingTasks(SerializedTaskQueue.java:161)
at org.xsocket.SerializedTaskQueue.access$100(SerializedTaskQueue.java:40)
at org.xsocket.SerializedTaskQueue$MultithreadedTaskProcessor.run(SerializedTaskQueue.java:189)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
I use xSocket version 2.8.15 (jar). Sorry about the tag; I have a permission issue that prevents me from creating a new one. The error can be described as follows: I have a number of INonBlockingConnection objects, and when I try to write some data using nbc.write() I get the error shown above. I have searched Google and visited the xSocket mailing list, but I cannot find any solution. I need some help. Thanks to StackOverflow, and sorry for my English. I am waiting for your advice.
KingSpeech.
The error appears fairly self-explanatory: your connection has been closed or failed to connect. This is most likely due to a problem on the other side (or in your network).
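If the peer can close the connection mid-session, you can at least fail gracefully by checking the connection state before writing. A minimal sketch, assuming the xSocket 2.8.x API (SafeWriter and tryWrite are illustrative names):
import java.io.IOException;
import java.nio.channels.ClosedChannelException;
import org.xsocket.connection.INonBlockingConnection;

public final class SafeWriter {
    // Returns true if the data was handed to the connection, false if it was already closed.
    public static boolean tryWrite(INonBlockingConnection nbc, String data) {
        if (!nbc.isOpen()) { // peer closed, or the connect never completed
            return false;
        }
        try {
            nbc.write(data);
            return true;
        } catch (ClosedChannelException e) {
            // the connection raced shut between the isOpen() check and the write
            return false;
        } catch (IOException e) {
            // other I/O failure; treat the connection as dead
            return false;
        }
    }
}
The deeper fix is to find out why the other side is closing the connection (idle timeouts, crashes, or network drops) and to drop closed connections from your pool when a disconnect is signalled.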
I am basically walking the LDAP tree in Active Directory.
At each level I query for "(objectClass=*)". When I do this on the root, e.g. "dc=example,dc=com", I get the exception below. This works fine on our other LDAP instances; for some reason I get this exception only on our Active Directory server. I also get the same exception when using JXplorer against our Active Directory server.
From reading around online I found people saying you should turn on "following", though I'm not sure what that means. So on the controls object (javax.naming.directory.SearchControls) that I pass with the query, I call searchControls.setDerefLinkFlag(true). I have also tried setting it to false, with the same result. Any suggestions on what else could cause this, or on how I could fix it?
Note: In this post I changed the baseDn from dc=<my company domain> to example for my company's privacy.
javax.naming.PartialResultException: Unprocessed Continuation Reference(s); remaining name 'dc=example,dc=com'
at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2820)
at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2794)
at com.sun.jndi.ldap.LdapCtx.searchAux(LdapCtx.java:1826)
at com.sun.jndi.ldap.LdapCtx.c_search(LdapCtx.java:1749)
at com.sun.jndi.toolkit.ctx.ComponentDirContext.p_search(ComponentDirContext.java:368)
at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:338)
at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:321)
at javax.naming.directory.InitialDirContext.search(InitialDirContext.java:248)
at com.motio.pi.gui.panels.useraccess.ldap.LDAPConnector.query(LDAPConnector.java:262)
at com.motio.pi.gui.selector.directory.CognosDirectoryBrowserController.expandCognosTreeNode(CognosDirectoryBrowserController.java:99)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.motio.pi.utils.PIThreadDelegate$1.run(PIThreadDelegate.java:54)
at java.lang.Thread.run(Thread.java:662)
When I was creating my naming context with the method:
javax.naming.ldap.InitialLdapContext.InitialLdapContext(
    Hashtable<?, ?> environment, Control[] connCtls)
there is a property in the environment argument named Context.REFERRAL, and its value should be set to "follow". This was the setting that I needed.
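A minimal sketch of what that looks like (the factory class is the standard JNDI LDAP provider; the URL and bind credentials are placeholders):
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.ldap.InitialLdapContext;

public final class AdContextFactory {
    public static InitialLdapContext create() throws NamingException {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ad.example.com:389");  // placeholder host
        env.put(Context.SECURITY_PRINCIPAL, "user@example.com");     // placeholder bind user
        env.put(Context.SECURITY_CREDENTIALS, "secret");             // placeholder password
        env.put(Context.REFERRAL, "follow"); // follow continuation references instead of throwing
        return new InitialLdapContext(env, null);
    }
}
With this setting, searches from the root of the domain no longer throw PartialResultException, because the provider chases the continuation references itself.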
If you get an exception while using referral "follow" (for example, a connection timeout), you can set referral to "ignore"; but if you don't want to get a PartialResultException either, you can use port 3268 instead of 389. That port serves the global catalog for LDAP. You can find more information at the following link:
https://technet.microsoft.com/en-us/library/how-global-catalog-servers-work(v=ws.10).aspx
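Applied to the sketch above, that alternative would be (the host is still a placeholder):
env.put(Context.REFERRAL, "ignore");                         // don't chase referrals
env.put(Context.PROVIDER_URL, "ldap://ad.example.com:3268"); // global catalog port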