Couchbase + Scala + Java SDK: IndexOutOfBoundsException

I am relatively new to Scala and to Couchbase, but I need to learn both fast. Recently, while trying to run a sample application using the Java Couchbase SDK from Scala, I ran into the following problem:
[cb-core-3-2] WARN com.couchbase.client.core.CouchbaseCore - Exception while Handling Request Events RequestEvent{request=null}
java.lang.IndexOutOfBoundsException: Index: 1854, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:653)
at java.util.ArrayList.get(ArrayList.java:429)
at com.couchbase.client.core.config.DefaultCouchbaseBucketConfig.nodeIndexForMaster(DefaultCouchbaseBucketConfig.java:135)
at com.couchbase.client.core.node.locate.KeyValueLocator.calculateNodeId(KeyValueLocator.java:165)
at com.couchbase.client.core.node.locate.KeyValueLocator.locateForCouchbaseBucket(KeyValueLocator.java:124)
at com.couchbase.client.core.node.locate.KeyValueLocator.locateAndDispatch(KeyValueLocator.java:84)
at com.couchbase.client.core.RequestHandler.dispatchRequest(RequestHandler.java:219)
at com.couchbase.client.core.RequestHandler.onEvent(RequestHandler.java:176)
at com.couchbase.client.core.RequestHandler.onEvent(RequestHandler.java:71)
at com.couchbase.client.deps.com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:129)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at com.couchbase.client.deps.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
[error] (run-main-0) java.lang.RuntimeException: java.util.concurrent.TimeoutException
java.lang.RuntimeException: java.util.concurrent.TimeoutException
at com.couchbase.client.java.util.Blocking.blockForSingle(Blocking.java:71)
at com.couchbase.client.java.CouchbaseBucket.upsert(CouchbaseBucket.java:354)
at com.couchbase.client.java.CouchbaseBucket.upsert(CouchbaseBucket.java:349)
at App$.main(Application.scala:28)
at App.main(Application.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
Caused by: java.util.concurrent.TimeoutException
at com.couchbase.client.java.util.Blocking.blockForSingle(Blocking.java:71)
at com.couchbase.client.java.CouchbaseBucket.upsert(CouchbaseBucket.java:354)
at com.couchbase.client.java.CouchbaseBucket.upsert(CouchbaseBucket.java:349)
at App$.main(Application.scala:28)
at App.main(Application.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
[trace] Stack trace suppressed: run last compile:run for the full output.
[cb-core-3-1] WARN com.couchbase.client.core.CouchbaseCore - Exception while Handling Response Events null
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2048)
at com.couchbase.client.deps.com.lmax.disruptor.BlockingWaitStrategy.waitFor(BlockingWaitStrategy.java:45)
at com.couchbase.client.deps.com.lmax.disruptor.ProcessingSequenceBarrier.waitFor(ProcessingSequenceBarrier.java:56)
at com.couchbase.client.deps.com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:124)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at com.couchbase.client.deps.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
[cb-core-3-2] WARN com.couchbase.client.core.CouchbaseCore - Exception while Handling Request Events RequestEvent{request=null}
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2048)
at com.couchbase.client.deps.com.lmax.disruptor.BlockingWaitStrategy.waitFor(BlockingWaitStrategy.java:45)
at com.couchbase.client.deps.com.lmax.disruptor.ProcessingSequenceBarrier.waitFor(ProcessingSequenceBarrier.java:56)
at com.couchbase.client.deps.com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:124)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at com.couchbase.client.deps.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
java.lang.RuntimeException: Nonzero exit code: 1
at scala.sys.package$.error(package.scala:27)
And this is the code that generated the error:
import com.couchbase.client.java._
import com.couchbase.client.core.time._
import com.couchbase.client.java.document._
import com.couchbase.client.java.document.json._
import com.couchbase.client.java.query._
import com.couchbase.client.java.env.DefaultCouchbaseEnvironment

object App {
  def main(args: Array[String]): Unit = {
    // Initialize the connection (connects to localhost)
    val env = DefaultCouchbaseEnvironment.builder()
      .connectTimeout(5000)
      .bootstrapCarrierEnabled(false)
      .build()
    val cluster = CouchbaseCluster.create(env, "127.0.0.1")

    // Open the "default" bucket
    val bucket = cluster.openBucket("default")

    // Create a JSON document
    val user: JsonObject = JsonObject.create()
      .put("firstname", "Walter")
      .put("lastname", "White")
      .put("job", "chemistry teacher")
      .put("age", 50)
    val stored: JsonDocument = bucket.upsert(JsonDocument.create("walter", user))

    // Load the document and print its content and metadata
    println(bucket.get("walter"))

    // Just close this one bucket
    bucket.close()

    // Disconnect and close all buckets
    cluster.disconnect()
  }
}
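(For reference: the blocking upsert also has an overload that takes an explicit per-call timeout. A quick sketch with the same document, the 10-second value being arbitrary; it only changes how long the call blocks before the TimeoutException above.)

import java.util.concurrent.TimeUnit

// Same upsert as above, but blocking for up to 10 seconds instead of
// the environment's default timeout.
val storedWithTimeout: JsonDocument =
  bucket.upsert(JsonDocument.create("walter", user), 10, TimeUnit.SECONDS)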
EDIT:
I am a little new to this, but here is what I managed to get out of the debugger. Note that partitions has size 0: the client computes a partition index for the key, but the bucket config it received contains an empty partition list, which lines up with the IndexOutOfBoundsException above.
this = {DefaultCouchbaseBucketConfig#2385} "DefaultCouchbaseBucketConfig{name='testBucket', locator=VBUCKET, uri='/pools/default/buckets/testBucket?bucket_uuid=54c1356c57dea1d640837c678f87d5e4', streamingUri='/pools/default/bucketsStreaming/testBucket?bucket_uuid=54c1356c57dea1d640837c678f87d5e4', nodeInfo=[NodeInfo{, hostname=localhost/127.0.0.1, configPort=0, directServices={CONFIG=8091, QUERY=8093, VIEW=8092, BINARY=11210}, sslServices={CONFIG=18091, QUERY=18093, VIEW=18092, BINARY=11207}}], partitionInfo=PartitionInfo{numberOfReplicas=1, partitionHosts=[localhost], partitions=[], tainted=false}, tainted=false, rev=23}"
partitionInfo = {CouchbasePartitionInfo#2391} "PartitionInfo{numberOfReplicas=1, partitionHosts=[localhost], partitions=[], tainted=false}"
numberOfReplicas = 1
partitionHosts = {String[1]#2428}
partitions = {ArrayList#2390} size = 0
forwardPartitions = null
tainted = false
partitionHosts = {ArrayList#2396} size = 1
0 = {DefaultNodeInfo#2425} "NodeInfo{, hostname=localhost/127.0.0.1, configPort=8091, directServices={CONFIG=8091, BINARY=11210, VIEW=8092}, sslServices={}}"
nodesWithPrimaryPartitions = {HashSet#2397} size = 0
tainted = false
rev = 23
name = "testBucket"
value = {char[10]#2422}
hash = 1241531676
password = ""
value = {char[0]#2421}
hash = 0
locator = {BucketNodeLocator#2400} "VBUCKET"
name = "VBUCKET"
ordinal = 0
uri = "/pools/default/buckets/testBucket?bucket_uuid=54c1356c57dea1d640837c678f87d5e4"
value = {char[78]#2411}
hash = 0
streamingUri = "/pools/default/bucketsStreaming/testBucket?bucket_uuid=54c1356c57dea1d640837c678f87d5e4"
value = {char[87]#2420}
hash = 0
nodeInfo = {ArrayList#2403} size = 1
0 = {DefaultNodeInfo#2408} "NodeInfo{, hostname=localhost/127.0.0.1, configPort=0, directServices={CONFIG=8091, QUERY=8093, VIEW=8092, BINARY=11210}, sslServices={CONFIG=18091, QUERY=18093, VIEW=18092, BINARY=11207}}"
enabledServices = 15
partition = 7620
useFastForward = false
EDIT 2:
I had a look at the Couchbase Console log and I am constantly getting the following:
Service 'memcached' exited with status 1. Restarting. Messages: Failed to open library "/Users/luishreis/Downloads/couchbase-server-enterprise_4/Couchbase Server.app/Contents/Resources/couchbase-core/lib/memcached/stdin_term_handler.so": dlopen(/Users/luishreis/Downloads/couchbase-server-enterprise_4/Couchbase Server.app/Contents/Resources/couchbase-core/lib/memcached/stdin_term_handler.dylib, 6): image not found
Unable to load extension /Users/luishreis/Downloads/couchbase-server-enterprise_4/Couchbase Server.app/Contents/Resources/couchbase-core/lib/memcached/stdin_term_handler.so using the config
Any help on the matter would be appreciated.
Many thanks.

Related

Stacking of different models (including rf, glm) in h2o (R)

I have a question regarding h2o.stackedEnsemble in R. When I try to create an ensemble from GLM models (or any other models and GLM) I get the following error:
DistributedException from localhost/127.0.0.1:54321: 'null', caused by java.lang.NullPointerException
DistributedException from localhost/127.0.0.1:54321: 'null', caused by java.lang.NullPointerException
at water.MRTask.getResult(MRTask.java:478)
at water.MRTask.getResult(MRTask.java:486)
at water.MRTask.doAll(MRTask.java:390)
at water.MRTask.doAll(MRTask.java:396)
at hex.StackedEnsembleModel.predictScoreImpl(StackedEnsembleModel.java:123)
at hex.StackedEnsembleModel.doScoreMetricsOneFrame(StackedEnsembleModel.java:194)
at hex.StackedEnsembleModel.doScoreOrCopyMetrics(StackedEnsembleModel.java:206)
at hex.ensemble.StackedEnsemble$StackedEnsembleDriver.computeMetaLearner(StackedEnsemble.java:302)
at hex.ensemble.StackedEnsemble$StackedEnsembleDriver.computeImpl(StackedEnsemble.java:231)
at hex.ModelBuilder$Driver.compute2(ModelBuilder.java:206)
at water.H2O$H2OCountedCompleter.compute(H2O.java:1263)
at jsr166y.CountedCompleter.exec(CountedCompleter.java:468)
at jsr166y.ForkJoinTask.doExec(ForkJoinTask.java:263)
at jsr166y.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:974)
at jsr166y.ForkJoinPool.runWorker(ForkJoinPool.java:1477)
at jsr166y.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104)
Caused by: java.lang.NullPointerException
Error: DistributedException from localhost/127.0.0.1:54321: 'null', caused by java.lang.NullPointerException
The error does not occur when I stack any other models; it only appears with the GLM. Of course, I use the same folds for cross-validation.
Some sample code for training the model and the ensemble:
glm_grid <- h2o.grid(algorithm = "glm",
                     family = 'binomial',
                     grid_id = "glm_grid",
                     x = predictors,
                     y = response,
                     seed = 1,
                     fold_column = "fold_assignment",
                     training_frame = train_h2o,
                     keep_cross_validation_predictions = TRUE,
                     hyper_params = list(alpha = seq(0, 1, 0.05)),
                     lambda_search = TRUE,
                     search_criteria = search_criteria,
                     balance_classes = TRUE,
                     early_stopping = TRUE)

glm <- h2o.getGrid("glm_grid",
                   sort_by = "auc",
                   decreasing = TRUE)

ensemble <- h2o.stackedEnsemble(x = predictors,
                                y = response,
                                training_frame = train_h2o,
                                model_id = "ens_1",
                                base_models = glm@model_ids[1:5])
This is a bug, and you can track the progress of the fix here (this should be fixed in the next release, but it might be fixed sooner and available on the nightly releases).
I was going to suggest training the GLMs in a loop or apply function (instead of using h2o.grid()) as a temporary work-around, but unfortunately, the same error happens.

Apache Ignite - java.lang.ClassNotFoundException: Unknown pair

This is my last attempt to configure Apache Ignite 2.0 to work with Cassandra as the persistence layer and ODBC as the query layer.
The ODBC configuration is OK: I am able to put and get data in the cache with SQL. But when I plug in Cassandra (version 3.9, via the Docker image) as the persistence layer, I get this:
java.lang.ClassNotFoundException: Unknown pair [platformId=0, typeId=1262449073]
I tried googling this exception but found no useful hints.
Here's my Ignite configuration:
boolean persistence = true;

IgniteConfiguration cfg = new IgniteConfiguration();
CacheConfiguration<String, ValueClass> configuration = new CacheConfiguration<String, ValueClass>();
configuration.setName("test-cache");
configuration.setIndexedTypes(String.class, ValueClass.class);

if (persistence) {
    // Configuring Cassandra's persistence
    DataSource dataSource = new DataSource();
    dataSource.setContactPoints("172.17.0.2");
    RoundRobinPolicy robinPolicy = new RoundRobinPolicy();
    dataSource.setLoadBalancingPolicy(robinPolicy);
    dataSource.setReadConsistency("ONE");
    dataSource.setWriteConsistency("ONE");

    String persistenceSettingsXml = FileUtils.readFileToString(new File(persistenceSettingsConfig), "utf-8");
    KeyValuePersistenceSettings persistenceSettings = new KeyValuePersistenceSettings(persistenceSettingsXml);

    CassandraCacheStoreFactory cacheStoreFactory = new CassandraCacheStoreFactory();
    cacheStoreFactory.setDataSource(dataSource);
    cacheStoreFactory.setPersistenceSettings(persistenceSettings);

    configuration.setCacheStoreFactory(cacheStoreFactory);
    configuration.setWriteThrough(true);
    configuration.setReadThrough(true);
    configuration.setWriteBehindEnabled(true);
}

// Setting cache configuration
cfg.setCacheConfiguration(configuration);

// Configuring ODBC
OdbcConfiguration odbcConfig = new OdbcConfiguration();
odbcConfig.setMaxOpenCursors(100);
cfg.setOdbcConfiguration(odbcConfig);

// Starting Ignite
Ignite ignite = Ignition.start(cfg);
ValueClass:
public class ValueClass implements Serializable {
    @QuerySqlField
    private Integer numberOne;
    @QuerySqlField
    private Integer numberTwo;

    public Integer getNumberOne() { return numberOne; }
    public Integer getNumberTwo() { return numberTwo; }

    public void setNumberOne(Integer value) { numberOne = value; }
    public void setNumberTwo(Integer value) { numberTwo = value; }
}
Persistence configuration:
<persistence keyspace="ignite" table="odbc_test" ttl="86400">
    <keyspaceOptions>
        REPLICATION = {'class' : 'SimpleStrategy', 'replication_factor' : 1}
        AND DURABLE_WRITES = true
    </keyspaceOptions>
    <tableOption>
        comment = 'Cache test'
        AND read_repair_chance = 0.2
    </tableOption>
    <keyPersistence class="java.lang.String" strategy="PRIMITIVE" column="key" />
    <valuePersistence class="com.riccamini.ignite.ValueClass" strategy="POJO">
        <value name="numberOne" column="number_one"/>
        <value name="numberTwo" column="number_two"/>
    </valuePersistence>
</persistence>
Complete stack trace:
SEVERE: <test-cache> Unexpected exception during cache update
class org.apache.ignite.IgniteException: Runtime failure on search row: org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$SearchRow@6380d269
at org.apache.ignite.internal.processors.cache.database.tree.BPlusTree.invoke(BPlusTree.java:1615)
at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:925)
at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:326)
at org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerUpdate(GridCacheMapEntry.java:1693)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateSingle(GridDhtAtomicCache.java:2386)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1792)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1630)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.sendSingleRequest(GridNearAtomicAbstractUpdateFuture.java:299)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.map(GridNearAtomicSingleUpdateFuture.java:480)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.mapOnTopology(GridNearAtomicSingleUpdateFuture.java:440)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.map(GridNearAtomicAbstractUpdateFuture.java:248)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update0(GridDhtAtomicCache.java:1162)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.put0(GridDhtAtomicCache.java:651)
at org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2345)
at org.apache.ignite.internal.processors.cache.GridCacheAdapter.putIfAbsent(GridCacheAdapter.java:2720)
at org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.doInsert(DmlStatementsProcessor.java:829)
at org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.executeUpdateStatement(DmlStatementsProcessor.java:369)
at org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.updateSqlFields(DmlStatementsProcessor.java:164)
at org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.updateSqlFieldsTwoStep(DmlStatementsProcessor.java:198)
at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.queryTwoStep(IgniteH2Indexing.java:1659)
at org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:1659)
at org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:1657)
at org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
at org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2103)
at org.apache.ignite.internal.processors.query.GridQueryProcessor.queryTwoStep(GridQueryProcessor.java:1657)
at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.query(IgniteCacheProxy.java:806)
at org.apache.ignite.internal.processors.odbc.OdbcRequestHandler.executeQuery(OdbcRequestHandler.java:213)
at org.apache.ignite.internal.processors.odbc.OdbcRequestHandler.handle(OdbcRequestHandler.java:108)
at org.apache.ignite.internal.processors.odbc.OdbcNioListener.onMessage(OdbcNioListener.java:124)
at org.apache.ignite.internal.processors.odbc.OdbcNioListener.onMessage(OdbcNioListener.java:33)
at org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
at org.apache.ignite.internal.util.nio.GridNioAsyncNotifyFilter$3.body(GridNioAsyncNotifyFilter.java:97)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at org.apache.ignite.internal.util.worker.GridWorkerPool$1.run(GridWorkerPool.java:70)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: class org.apache.ignite.binary.BinaryInvalidTypeException: Unknown pair [platformId=0, typeId=1262449073]
at org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:701)
at org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1745)
at org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1704)
at org.apache.ignite.internal.binary.BinaryObjectImpl.deserializeValue(BinaryObjectImpl.java:794)
at org.apache.ignite.internal.binary.BinaryObjectImpl.value(BinaryObjectImpl.java:142)
at org.apache.ignite.internal.processors.cache.CacheObjectContext.unwrapBinary(CacheObjectContext.java:273)
at org.apache.ignite.internal.processors.cache.CacheObjectContext.unwrapBinaryIfNeeded(CacheObjectContext.java:161)
at org.apache.ignite.internal.processors.cache.CacheObjectContext.unwrapBinaryIfNeeded(CacheObjectContext.java:148)
at org.apache.ignite.internal.processors.cache.GridCacheContext.unwrapBinaryIfNeeded(GridCacheContext.java:1730)
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.put(GridCacheStoreManagerAdapter.java:555)
at org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.update(GridCacheMapEntry.java:4404)
at org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.call(GridCacheMapEntry.java:4226)
at org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.call(GridCacheMapEntry.java:3966)
at org.apache.ignite.internal.processors.cache.database.tree.BPlusTree$Invoke.invokeClosure(BPlusTree.java:2966)
at org.apache.ignite.internal.processors.cache.database.tree.BPlusTree$Invoke.access$6200(BPlusTree.java:2860)
at org.apache.ignite.internal.processors.cache.database.tree.BPlusTree.invokeDown(BPlusTree.java:1696)
at org.apache.ignite.internal.processors.cache.database.tree.BPlusTree.invoke(BPlusTree.java:1585)
... 37 more
Caused by: java.lang.ClassNotFoundException: Unknown pair [platformId=0, typeId=1262449073]
at org.apache.ignite.internal.MarshallerContextImpl.getClassName(MarshallerContextImpl.java:385)
at org.apache.ignite.internal.MarshallerContextImpl.getClass(MarshallerContextImpl.java:335)
at org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:692)
... 53 more
Jun 28, 2017 10:02:41 AM org.apache.ignite.logger.java.JavaLogger error
SEVERE: Failed to execute SQL query [reqId=2, req=OdbcQueryExecuteRequest [cacheName=test-cache, sqlQry=INSERT INTO valueclass (_key, numberone, numbertwo) VALUES ('testkey2', 10, 10), args=[]]]
javax.cache.CacheException: class org.apache.ignite.internal.processors.query.IgniteSQLException: Failed to execute DML statement [stmt=INSERT INTO valueclass (_key, numberone, numbertwo) VALUES ('testkey2', 10, 10), params=[]]
at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.query(IgniteCacheProxy.java:818)
at org.apache.ignite.internal.processors.odbc.OdbcRequestHandler.executeQuery(OdbcRequestHandler.java:213)
at org.apache.ignite.internal.processors.odbc.OdbcRequestHandler.handle(OdbcRequestHandler.java:108)
at org.apache.ignite.internal.processors.odbc.OdbcNioListener.onMessage(OdbcNioListener.java:124)
at org.apache.ignite.internal.processors.odbc.OdbcNioListener.onMessage(OdbcNioListener.java:33)
at org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
at org.apache.ignite.internal.util.nio.GridNioAsyncNotifyFilter$3.body(GridNioAsyncNotifyFilter.java:97)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at org.apache.ignite.internal.util.worker.GridWorkerPool$1.run(GridWorkerPool.java:70)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: class org.apache.ignite.internal.processors.query.IgniteSQLException: Failed to execute DML statement [stmt=INSERT INTO valueclass (_key, numberone, numbertwo) VALUES ('testkey2', 10, 10), params=[]]
at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.queryTwoStep(IgniteH2Indexing.java:1662)
at org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:1659)
at org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:1657)
at org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
at org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2103)
at org.apache.ignite.internal.processors.query.GridQueryProcessor.queryTwoStep(GridQueryProcessor.java:1657)
at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.query(IgniteCacheProxy.java:806)
... 12 more
Caused by: class org.apache.ignite.internal.processors.cache.CachePartialUpdateCheckedException: Failed to update keys (retry update if possible).: [testkey2]
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.onPrimaryError(GridNearAtomicAbstractUpdateFuture.java:397)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.onPrimaryResponse(GridNearAtomicSingleUpdateFuture.java:250)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture$1.apply(GridNearAtomicAbstractUpdateFuture.java:303)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture$1.apply(GridNearAtomicAbstractUpdateFuture.java:300)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1885)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1630)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.sendSingleRequest(GridNearAtomicAbstractUpdateFuture.java:299)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.map(GridNearAtomicSingleUpdateFuture.java:480)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.mapOnTopology(GridNearAtomicSingleUpdateFuture.java:440)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.map(GridNearAtomicAbstractUpdateFuture.java:248)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update0(GridDhtAtomicCache.java:1162)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.put0(GridDhtAtomicCache.java:651)
at org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2345)
at org.apache.ignite.internal.processors.cache.GridCacheAdapter.putIfAbsent(GridCacheAdapter.java:2720)
at org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.doInsert(DmlStatementsProcessor.java:829)
at org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.executeUpdateStatement(DmlStatementsProcessor.java:369)
at org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.updateSqlFields(DmlStatementsProcessor.java:164)
at org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.updateSqlFieldsTwoStep(DmlStatementsProcessor.java:198)
at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.queryTwoStep(IgniteH2Indexing.java:1659)
... 18 more
Suppressed: class org.apache.ignite.IgniteCheckedException: Failed to update keys on primary node.
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.UpdateErrors.addFailedKeys(UpdateErrors.java:124)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateResponse.addFailedKeys(GridNearAtomicUpdateResponse.java:342)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1883)
... 32 more
Suppressed: class org.apache.ignite.IgniteException: Runtime failure on search row: org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$SearchRow@6380d269
at org.apache.ignite.internal.processors.cache.database.tree.BPlusTree.invoke(BPlusTree.java:1615)
at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:925)
at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:326)
at org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerUpdate(GridCacheMapEntry.java:1693)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateSingle(GridDhtAtomicCache.java:2386)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1792)
... 32 more
Caused by: class org.apache.ignite.binary.BinaryInvalidTypeException: Unknown pair [platformId=0, typeId=1262449073]
at org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:701)
at org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1745)
at org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1704)
at org.apache.ignite.internal.binary.BinaryObjectImpl.deserializeValue(BinaryObjectImpl.java:794)
at org.apache.ignite.internal.binary.BinaryObjectImpl.value(BinaryObjectImpl.java:142)
at org.apache.ignite.internal.processors.cache.CacheObjectContext.unwrapBinary(CacheObjectContext.java:273)
at org.apache.ignite.internal.processors.cache.CacheObjectContext.unwrapBinaryIfNeeded(CacheObjectContext.java:161)
at org.apache.ignite.internal.processors.cache.CacheObjectContext.unwrapBinaryIfNeeded(CacheObjectContext.java:148)
at org.apache.ignite.internal.processors.cache.GridCacheContext.unwrapBinaryIfNeeded(GridCacheContext.java:1730)
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.put(GridCacheStoreManagerAdapter.java:555)
at org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.update(GridCacheMapEntry.java:4404)
at org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.call(GridCacheMapEntry.java:4226)
at org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.call(GridCacheMapEntry.java:3966)
at org.apache.ignite.internal.processors.cache.database.tree.BPlusTree$Invoke.invokeClosure(BPlusTree.java:2966)
at org.apache.ignite.internal.processors.cache.database.tree.BPlusTree$Invoke.access$6200(BPlusTree.java:2860)
at org.apache.ignite.internal.processors.cache.database.tree.BPlusTree.invokeDown(BPlusTree.java:1696)
at org.apache.ignite.internal.processors.cache.database.tree.BPlusTree.invoke(BPlusTree.java:1585)
... 37 more
Caused by: java.lang.ClassNotFoundException: Unknown pair [platformId=0, typeId=1262449073]
at org.apache.ignite.internal.MarshallerContextImpl.getClassName(MarshallerContextImpl.java:385)
at org.apache.ignite.internal.MarshallerContextImpl.getClass(MarshallerContextImpl.java:335)
at org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:692)
... 53 more
Any suggestion is really appreciated.
Please try turning on storeKeepBinary in the cache settings, like this; please note the last line:
if (persistence) {
    // Configuring Cassandra's persistence
    DataSource dataSource = new DataSource();
    // ...here go the rest of your settings as they appear now...
    configuration.setWriteBehindEnabled(true);
    configuration.setStoreKeepBinary(true);
}
This setting forces Ignite to avoid binary deserialization when working with the underlying cache store.
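With that flag on, values can also be read back in binary form without deserializing ValueClass at all. A minimal sketch (my own illustration, not part of the fix; it assumes the "test-cache" name and the key from the question):

import org.apache.ignite.{Ignite, IgniteCache}
import org.apache.ignite.binary.BinaryObject

def readBinary(ignite: Ignite): Integer = {
  // withKeepBinary() returns a view of the cache that hands back
  // BinaryObject instances instead of deserialized ValueClass objects.
  val binCache: IgniteCache[String, BinaryObject] =
    ignite.cache[String, AnyRef]("test-cache").withKeepBinary[String, BinaryObject]()
  val value: BinaryObject = binCache.get("testkey2")
  value.field[Integer]("numberOne") // read one field straight from the binary form
}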

I am getting an error when using Flume

I am using the configuration below:
ClouderaTwitterAgent.sources = Twitter
ClouderaTwitterAgent.channels = MemChannel
ClouderaTwitterAgent.sinks = HDFS
ClouderaTwitterAgent.sources.Twitter.type = com.cloudera.flume.source.TwitterSource
ClouderaTwitterAgent.sources.Twitter.channels = MemChannel
ClouderaTwitterAgent.sources.Twitter.consumerKey = xxxxxxxxxxxxx
ClouderaTwitterAgent.sources.Twitter.consumerSecret = xxxxxxxxxxxxx
ClouderaTwitterAgent.sources.Twitter.accessToken = xxxxxxxxxxxxxxxx
ClouderaTwitterAgent.sources.Twitter.accessTokenSecret = xxxxxxxxxxxxxx
ClouderaTwitterAgent.sources.Twitter.keywords = Sully
ClouderaTwitterAgent.sinks.HDFS.channel = MemChannel
ClouderaTwitterAgent.sinks.HDFS.type = hdfs
ClouderaTwitterAgent.sinks.HDFS.hdfs.path = hdfs://localhost:9000/user/tweets
ClouderaTwitterAgent.sinks.HDFS.hdfs.fileType = DataStream
ClouderaTwitterAgent.sinks.HDFS.hdfs.writeFormat = Text
ClouderaTwitterAgent.sinks.HDFS.hdfs.batchSize = 1000
ClouderaTwitterAgent.sinks.HDFS.hdfs.rollSize = 0
ClouderaTwitterAgent.sinks.HDFS.hdfs.rollCount = 10000
ClouderaTwitterAgent.channels.MemChannel.type = memory
ClouderaTwitterAgent.channels.MemChannel.capacity = 10000
ClouderaTwitterAgent.channels.MemChannel.transactionCapacity = 100
and this is the command that I am using to run Flume:
`bin/flume-ng agent --conf ./conf/ -f conf/flume-cloudera.conf -Dflume.root.logger=DEBUG,console -n ClouderaTwitterAgent`
and this is the error that I am getting:
2016-09-20 14:53:14,245 (Twitter4J Async Dispatcher[0]) [DEBUG - com.cloudera.flume.source.TwitterSource$1.onStatus(TwitterSource.java:121)] tweet arrived
2016-09-20 14:53:16,073 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:234)] Creating hdfs://localhost:9000/user/tweets/FlumeData.1474363316543.tmp
2016-09-20 14:53:16,113 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:459)] process failed
java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:131)
at org.apache.hadoop.security.Groups.<init>(Groups.java:64)
at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:240)
at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:255)
at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:232)
at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:718)
at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:703)
at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:605)
at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2554)
at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2546)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2412)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:243)
at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:235)
at org.apache.flume.sink.hdfs.BucketWriter$9$1.run(BucketWriter.java:679)
at org.apache.flume.auth.SimpleAuthenticator.execute(SimpleAuthenticator.java:50)
at org.apache.flume.sink.hdfs.BucketWriter$9.call(BucketWriter.java:676)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
... 21 more
Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.security.JniBasedUnixGroupsMapping
at org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.<init>(JniBasedUnixGroupsMappingWithFallback.java:38)
... 26 more
2016-09-20 14:53:16,121 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:160)] Unable to deliver event. Exception follows.
Can anyone please help me? I am new to this; I got this configuration from the internet and followed every instruction. I even searched for a solution to this problem. Here is what I have done so far:
1. Checked my system clock.
2. Removed the HDFS folder and then made a new folder.
3. Formatted the namenode.
4. Restarted the agent several times.

Selenium testing with lists for load performance

I'm attempting to use one of my unit tests for "load" testing of our browser. For various reasons, we have seen performance degradation on the browser side, because we rely heavily on the print dialog.
I have the following unit test working via ScalaTest:
import org.scalatest.{FlatSpec, Matchers}
import org.scalatest.concurrent.Eventually
import org.scalatest.selenium.Chrome
import org.scalatest.time.{Millis, Seconds, Span}
import org.openqa.selenium.By
import scala.collection.JavaConversions._

// TestCSVHolder and url are defined elsewhere in the project.
class LoadPrePaidSpec extends FlatSpec with Matchers with Chrome with Eventually {
  implicit override val patienceConfig =
    PatienceConfig(timeout = scaled(Span(40, Seconds)), interval = scaled(Span(100, Millis)))

  def build(csvLine: String): TestCSVHolder = {
    val split = csvLine.split(",")
    TestCSVHolder(memberId = split(0), preSaleCode = split(1),
      prePaidCode = split(2), lastName = split(3), firstName = split(4), badgeName = split(5))
  }

  def memberHelper(member: TestCSVHolder): Unit = {
    // insert member id via prepaid code
    textField("member_id").value = member.prePaidCode
    // fire keyup event
    executeScript("var eventToFire=jQuery.Event(\"keyup\");eventToFire.keyCode=221;eventToFire.which=221;" +
      "$(\"#member_id\").trigger(eventToFire)")
    eventually {
      val eles = webDriver.findElements(By.xpath(s"//*[contains(@id, '${member.memberId}')]"))
      eles.get(0).getTagName
      // We remove the head element because it just says Prep For Print
      val tdEles = eles.get(0).findElements(By.tagName("td")).toList.tail
      tdEles(0).getText() should be(member.lastName)
      tdEles(1).getText() should be(member.firstName)
      tdEles(2).getText() should be(member.badgeName)
    }
  }

  "Scanning an ID" should "look up the member" in {
    val member = new TestCSVHolder("100001", "ABCD", "[-100001-ABCD]", "John", "Doe", "JohnDoe")
    go to (url)
    // login
    textField("user_name").value = "mrkaiser"
    webDriver.findElementById("credentials").sendKeys("somepassword")
    click on ("btnLogin")
    // click to pre-paid
    click on linkText("Pre-Paid")
    memberHelper(member)
    webDriver.quit()
  }
}
However, when I try to iterate through a list of elements using a foreach, passing each one to memberHelper, the test fails after a list of about 5 elements.
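The iteration is roughly the sketch below (the CSV file name is a placeholder; build and memberHelper are the methods above):

// Parse each CSV line into a TestCSVHolder and run the lookup
// assertions against it, one member at a time.
val members: List[TestCSVHolder] =
  scala.io.Source.fromFile("members.csv").getLines().map(build).toList
members.foreach(memberHelper)

After about five members, I get the following stack trace: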
The code passed to eventually never returned normally. Attempted 369 times over 40.110734904 seconds. Last failure message: Index: 0, Size: 0.
ScalaTestFailureLocation: LoadPrePaidSpec at (LoadPrePaidSpec.scala:43)
org.scalatest.exceptions.TestFailedDueToTimeoutException: The code passed to eventually never returned normally. Attempted 369 times over 40.110734904 seconds. Last failure message: Index: 0, Size: 0.
at org.scalatest.concurrent.Eventually$class.tryTryAgain$1(Eventually.scala:420)
at org.scalatest.concurrent.Eventually$class.eventually(Eventually.scala:438)
at LoadPrePaidSpec.eventually(LoadPrePaidSpec.scala:17)
at LoadPrePaidSpec.memberHelper(LoadPrePaidSpec.scala:43)
at LoadPrePaidSpec$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(LoadPrePaidSpec.scala:70)
at LoadPrePaidSpec$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(LoadPrePaidSpec.scala:70)
at scala.collection.immutable.List.foreach(List.scala:383)
at LoadPrePaidSpec$$anonfun$1.apply$mcV$sp(LoadPrePaidSpec.scala:70)
at LoadPrePaidSpec$$anonfun$1.apply(LoadPrePaidSpec.scala:54)
at LoadPrePaidSpec$$anonfun$1.apply(LoadPrePaidSpec.scala:54)
at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
at org.scalatest.Transformer.apply(Transformer.scala:22)
at org.scalatest.Transformer.apply(Transformer.scala:20)
at org.scalatest.FlatSpecLike$$anon$1.apply(FlatSpecLike.scala:1647)
at org.scalatest.Suite$class.withFixture(Suite.scala:1122)
at org.scalatest.FlatSpec.withFixture(FlatSpec.scala:1683)
at org.scalatest.FlatSpecLike$class.invokeWithFixture$1(FlatSpecLike.scala:1644)
at org.scalatest.FlatSpecLike$$anonfun$runTest$1.apply(FlatSpecLike.scala:1656)
at org.scalatest.FlatSpecLike$$anonfun$runTest$1.apply(FlatSpecLike.scala:1656)
at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
at org.scalatest.FlatSpecLike$class.runTest(FlatSpecLike.scala:1656)
at org.scalatest.FlatSpec.runTest(FlatSpec.scala:1683)
at org.scalatest.FlatSpecLike$$anonfun$runTests$1.apply(FlatSpecLike.scala:1714)
at org.scalatest.FlatSpecLike$$anonfun$runTests$1.apply(FlatSpecLike.scala:1714)
at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
at scala.collection.immutable.List.foreach(List.scala:383)
at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:390)
at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:427)
at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
at scala.collection.immutable.List.foreach(List.scala:383)
at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
at org.scalatest.FlatSpecLike$class.runTests(FlatSpecLike.scala:1714)
at org.scalatest.FlatSpec.runTests(FlatSpec.scala:1683)
at org.scalatest.Suite$class.run(Suite.scala:1424)
at org.scalatest.FlatSpec.org$scalatest$FlatSpecLike$$super$run(FlatSpec.scala:1683)
at org.scalatest.FlatSpecLike$$anonfun$run$1.apply(FlatSpecLike.scala:1760)
at org.scalatest.FlatSpecLike$$anonfun$run$1.apply(FlatSpecLike.scala:1760)
at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
at org.scalatest.FlatSpecLike$class.run(FlatSpecLike.scala:1760)
at org.scalatest.FlatSpec.run(FlatSpec.scala:1683)
at org.scalatest.tools.SuiteRunner.run(SuiteRunner.scala:55)
at org.scalatest.tools.Runner$$anonfun$doRunRunRunDaDoRunRun$3.apply(Runner.scala:2563)
at org.scalatest.tools.Runner$$anonfun$doRunRunRunDaDoRunRun$3.apply(Runner.scala:2557)
at scala.collection.immutable.List.foreach(List.scala:383)
at org.scalatest.tools.Runner$.doRunRunRunDaDoRunRun(Runner.scala:2557)
at org.scalatest.tools.Runner$$anonfun$runOptionallyWithPassFailReporter$2.apply(Runner.scala:1044)
at org.scalatest.tools.Runner$$anonfun$runOptionallyWithPassFailReporter$2.apply(Runner.scala:1043)
at org.scalatest.tools.Runner$.withClassLoaderAndDispatchReporter(Runner.scala:2722)
at org.scalatest.tools.Runner$.runOptionallyWithPassFailReporter(Runner.scala:1043)
at org.scalatest.tools.Runner$.run(Runner.scala:883)
at org.scalatest.tools.Runner.run(Runner.scala)
at org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.runScalaTest2(ScalaTestRunner.java:138)
at org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.main(ScalaTestRunner.java:28)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:653)
at java.util.ArrayList.get(ArrayList.java:429)
at LoadPrePaidSpec$$anonfun$memberHelper$1.apply$mcV$sp(LoadPrePaidSpec.scala:45)
at LoadPrePaidSpec$$anonfun$memberHelper$1.apply(LoadPrePaidSpec.scala:43)
at LoadPrePaidSpec$$anonfun$memberHelper$1.apply(LoadPrePaidSpec.scala:43)
at org.scalatest.concurrent.Eventually$class.makeAValiantAttempt$1(Eventually.scala:394)
at org.scalatest.concurrent.Eventually$class.tryTryAgain$1(Eventually.scala:408)
... 63 more
My end goal is to actually test something in the 20K range of elements from a file, but until I can get a small list like this working, I'm up a creek.
I'm using ChromeDriver and am on Scala 2.11.6, ScalaTest 2.2.0, and Selenium 2.35.0.

Scala: for loop: iterating over a directory stream

Update:
I have written the following updated code after input from Scala experts here. The code compiles, but on "run" it throws an IllegalStateException; the error stack trace is posted after the code listing.
import java.io.IOException
import java.nio.file.FileSystems
import java.nio.file.FileVisitOption
import java.nio.file.FileVisitResult
import java.nio.file.FileVisitor
import java.nio.file.Files
import java.nio.file.Path
import java.nio.file.Paths
import java.nio.file.attribute.BasicFileAttributes
import java.util.EnumSet
import java.nio.file.{DirectoryStream, DirectoryIteratorException}
import scala.collection.JavaConversions._

class TestFVis(val searchPath: Path) extends FileVisitor[Path] {
  // Here I provide implementations for postVisitDirectory,
  // preVisitDirectory, visitFile, and visitFileFailed
}

object Main {
  def main(args: Array[String]) {
    val searchFileOrFolder: Path = Paths.get("C://my_dir")
    println("The Path object is: " + searchFileOrFolder)
    var testFileVisitorTop = new TestFVis(searchFileOrFolder)
    println("Our top level FileVisitor is: " + testFileVisitorTop)

    val opts = EnumSet.of(FileVisitOption.FOLLOW_LINKS)
    val rootDirsIterable: Iterable[Path] = FileSystems.getDefault.getRootDirectories // returns the default filesystem
    // and then returns an Iterable[Path] to iterate over the paths of the root directories
    var dirStream: Option[DirectoryStream[Path]] = None
    for (rootDir <- rootDirsIterable) {
      println("in the Outer For")
      dirStream = Some(Files.newDirectoryStream(searchFileOrFolder))
      def dstream = dirStream.get
      val streamIter = dstream.iterator().filter((path) => {
        Files.isRegularFile(path)
      })

      for (dirStreamUnwrapped <- dirStream; (filePath: Path) <- dirStreamUnwrapped) {
        //for( (filePath: DirectoryStream[Path]) <- dirStream) {
        val tempPath = Files.walkFileTree(filePath, testFileVisitorTop)
        //val tempPath = Files.walkFileTree(fileOrDir,opts,Integer.MAX_VALUE,testFileVisitorTop)
        println("current path is: " + tempPath)
        if (!testFileVisitorTop.found) {
          println("The file or folder " + searchFileOrFolder + " was not found!")
        }
      }
    }
  }
}
However, for historical context, here is the compile error I got at first:
[error] found : java.nio.file.Path => Unit
[error] required: java.nio.file.DirectoryStream[java.nio.file.Path] => ?
[error] for((filePath:Path) <- dirStream) {
--
After changing the code, it compiles with no errors, but I get an IllegalStateException on sbt 'run':
> run
ur top level FileVisitor is C:\my_dir
Our top level FileVisitor is: com.me.ds.TestFileVisitor@5564baf6
in the Outer For
[error] (run-main-0) java.lang.IllegalStateException: Iterator already obtained
java.lang.IllegalStateException: Iterator already obtained
at sun.nio.fs.WindowsDirectoryStream.iterator(WindowsDirectoryStream.java:117)
at scala.collection.convert.Wrappers$JIterableWrapper.iterator(Wrappers.scala:54)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:777)
at com.me.ds.Main$$anonfun$main$1$$anonfun$apply$1.apply(SampleFileVis.scala:76)
at com.me.ds.Main$$anonfun$main$1$$anonfun$apply$1.apply(AFileVisitor.scala:76)
at scala.Option.foreach(Option.scala:256)
at **com.me.ds.Main$$anonfun$main$1.apply(SampleFileVisitor.scala:76)**
at com.me.ds.Main$$anonfun$main$1.apply(AFileVisitor.scala:68)
at scala.collection.Iterator$class.foreach(Iterator.scala:743)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1195)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at com.me.ds.Main$.main(SampleFileVis.scala:68)
at com.me.ds.Main.main(SampleFileVis.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
[trace] Stack trace suppressed: run last compile:run for the full output.
java.lang.RuntimeException: Nonzero exit code: 1
at scala.sys.package$.error(package.scala:27)
[trace] Stack trace suppressed: run last compile:run for the full output.
[error] (compile:run) Nonzero exit code: 1
[error] Total time: 0 s, completed Mar 24, 2015 7:32:38 AM
>
I am also investigating the error on my own. If someone can point me in the right direction to get my code to compile and run, that would accomplish my goal here.
Thanks.
All you are doing with the first for is unwrapping the Option, which cannot be cast to a Path. You just need to take your unwrapped object and use it in the next part:
for (dirStreamUnwrapped <- dirStream;
     (filePath: Path) <- dirStreamUnwrapped) {
  val tempPath = Files.walkFileTree(filePath, testFVis)
}
A few other fixes:
1. object needs to be lowercase.
2. dirStream needs to be preceded by val.
3. You have too many } at the end.
4. You have new TestFileVisitor, but you meant new TestFVis.
5. dirStream has type Some[DirectoryStream[Path]], which means that you need (filePath: DirectoryStream[Path]) <- dirStream.
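One more note (mine, not part of the list above): a DirectoryStream supports only a single iterator, so the streamIter line, which calls dstream.iterator().filter(...), already consumes the stream before the for comprehension asks for a second iterator; that is exactly the "Iterator already obtained" IllegalStateException in your output. Dropping that line, the loop might look like this sketch (it relies on the JavaConversions import from the question; searchFileOrFolder and testFileVisitorTop are as defined there):

// Open the stream, traverse it exactly once, then close it.
val dirStream: Option[DirectoryStream[Path]] =
  Some(Files.newDirectoryStream(searchFileOrFolder))
for (dirStreamUnwrapped <- dirStream;
     filePath <- dirStreamUnwrapped) { // the one and only traversal
  val tempPath = Files.walkFileTree(filePath, testFileVisitorTop)
  println("current path is: " + tempPath)
}
dirStream.foreach(_.close()) // a DirectoryStream must be closed when done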
