Problem when creating audio context in LWJGL - Java

When I try to create an audio context in LWJGL 3.3.1, I get this error:
[ALSOFT] (EE) Failed to set real-time priority for thread: Operation not permitted (1)
Here is the code:
String defaultDeviceName = ALC10.alcGetString(0, ALC10.ALC_DEFAULT_DEVICE_SPECIFIER);
audioDevice = ALC10.alcOpenDevice(defaultDeviceName);
// The context must be created from the device handle, not from the
// still-uninitialized context variable.
audioContext = ALC10.alcCreateContext(audioDevice, (IntBuffer) null);
ALC10.alcMakeContextCurrent(audioContext);
ALCCapabilities alcCapabilities = ALC.createCapabilities(audioDevice);
ALCapabilities alCapabilities = AL.createCapabilities(alcCapabilities);
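For completeness, a matching teardown sketch for shutdown (assuming audioDevice and audioContext are the handles from the snippet above):

// Detach the current context before destroying it, then close the device.
ALC10.alcMakeContextCurrent(0);
ALC10.alcDestroyContext(audioContext);
ALC10.alcCloseDevice(audioDevice);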
I would be grateful for any advice.


About oshi-core on an M1 MacBook Air

I have two projects. One project uses oshi to get my CPU information, and it works well.
import oshi.SystemInfo;
import oshi.hardware.CentralProcessor;
import oshi.hardware.HardwareAbstractionLayer;

public class MyTest {
    public static void main(String[] args) {
        SystemInfo si = new SystemInfo();
        HardwareAbstractionLayer hal = si.getHardware();
        CentralProcessor cpu = hal.getProcessor();
        System.out.println("cpu.getLogicalProcessorCount(): " + cpu.getLogicalProcessorCount());
        System.out.println("cpu.getCurrentFreq(): " + cpu.getCurrentFreq());
        System.out.println("hal.getMemory(): " + hal.getMemory());
    }
}
But the other project uses oshi to get my CPU info with almost the same code, and it throws an exception.
Exception in thread "main" java.lang.UnsatisfiedLinkError: /Users/herrhu/Library/Caches/JNA/temp/jna7537516652503870093.tmp: dlopen(/Users/herrhu/Library/Caches/JNA/temp/jna7537516652503870093.tmp, 1): no suitable image found. Did find:
/Users/herrhu/Library/Caches/JNA/temp/jna7537516652503870093.tmp: no matching architecture in universal wrapper
/Users/herrhu/Library/Caches/JNA/temp/jna7537516652503870093.tmp: no matching architecture in universal wrapper
at java.lang.ClassLoader$NativeLibrary.load(Native Method)
at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1950)
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1832)
at java.lang.Runtime.load0(Runtime.java:811)
at java.lang.System.load(System.java:1088)
at com.sun.jna.Native.loadNativeDispatchLibraryFromClasspath(Native.java:1018)
at com.sun.jna.Native.loadNativeDispatchLibrary(Native.java:988)
at com.sun.jna.Native.<clinit>(Native.java:195)
at com.sun.jna.platform.mac.SystemB.<clinit>(SystemB.java:40)
at oshi.util.platform.mac.SysctlUtil.sysctl(SysctlUtil.java:61)
at oshi.hardware.platform.mac.MacCentralProcessor.initProcessorCounts(MacCentralProcessor.java:128)
at oshi.hardware.common.AbstractCentralProcessor.<init>(AbstractCentralProcessor.java:74)
at oshi.hardware.platform.mac.MacCentralProcessor.<init>(MacCentralProcessor.java:61)
at oshi.hardware.platform.mac.MacHardwareAbstractionLayer.createProcessor(MacHardwareAbstractionLayer.java:60)
at oshi.util.Memoizer$1.get(Memoizer.java:87)
at oshi.hardware.common.AbstractHardwareAbstractionLayer.getProcessor(AbstractHardwareAbstractionLayer.java:68)
at com.clougence.cloudcanal.sidecar.shell.OshiTest2.main(OshiTest2.java:28)
Process finished with exit code 1
I don't know why the same code in different projects has different results, even though the dependencies in the two projects are the same.
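One thing worth checking (an assumption on my part, not something stated in the post): "no matching architecture in universal wrapper" usually means the JNA native library being extracted does not match the JVM's architecture, e.g. an x86_64-only JNA running under an aarch64 (Apple Silicon) JVM. A quick sketch to compare what each project actually runs on:

// Print this in both projects; if os.arch or java.home differ,
// the two runs are using different JVMs/architectures.
System.out.println("os.arch   = " + System.getProperty("os.arch"));
System.out.println("java.home = " + System.getProperty("java.home"));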

Exception in thread "main" java.lang.StackOverflowError while running linear svm with One-vs-Rest model

I am trying to train a model using a linear SVM. I have nearly 300 classes, so I used the one-vs-rest (OvR) approach. I am using Apache Spark with Java. While running it with OvR I get the error below:
Exception in thread "main" java.lang.StackOverflowError
at scala.collection.TraversableLike$class.builder$1(TraversableLike.scala:229)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:233)
at scala.collection.AbstractSet.scala$collection$SetLike$$super$map(Set.scala:47)
at scala.collection.SetLike$class.map(SetLike.scala:92)
at scala.collection.AbstractSet.map(Set.scala:47)
at org.apache.spark.sql.catalyst.expressions.AttributeSet.foreach(AttributeSet.scala:120)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
at scala.collection.TraversableLike$class.to(TraversableLike.scala:590)
at org.apache.spark.sql.catalyst.expressions.AttributeSet.to(AttributeSet.scala:60)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
at org.apache.spark.sql.catalyst.expressions.AttributeSet.toBuffer(AttributeSet.scala:60)
at scala.collection.TraversableLike$class.toStream(TraversableLike.scala:585)
at org.apache.spark.sql.catalyst.expressions.AttributeSet.toStream(AttributeSet.scala:60)
at scala.collection.immutable.Stream$$anonfun$flatMap$1.apply(Stream.scala:497)
at scala.collection.immutable.Stream$$anonfun$flatMap$1.apply(Stream.scala:497)
at scala.collection.immutable.Stream.append(Stream.scala:255)
at scala.collection.immutable.Stream$$anonfun$append$1.apply(Stream.scala:255)
at scala.collection.immutable.Stream$$anonfun$append$1.apply(Stream.scala:255)
at scala.collection.immutable.Stream$Cons.tail(Stream.scala:1233)
at scala.collection.immutable.Stream$Cons.tail(Stream.scala:1223)
at scala.collection.immutable.Stream$$anonfun$flatMap$1.apply(Stream.scala:497)
at scala.collection.immutable.Stream$$anonfun$flatMap$1.apply(Stream.scala:497)
at scala.collection.immutable.Stream.append(Stream.scala:255)
at scala.collection.immutable.Stream$$anonfun$append$1.apply(Stream.scala:255)
at scala.collection.immutable.Stream$$anonfun$append$1.apply(Stream.scala:255)
at scala.collection.immutable.Stream$Cons.tail(Stream.scala:1233)
at scala.collection.immutable.Stream$Cons.tail(Stream.scala:1223)
at scala.collection.immutable.Stream$$anonfun$map$1.apply(Stream.scala:418)
at scala.collection.immutable.Stream$$anonfun$map$1.apply(Stream.scala:418)
at scala.collection.immutable.Stream$Cons.tail(Stream.scala:1233)
at scala.collection.immutable.Stream$Cons.tail(Stream.scala:1223)
at scala.collection.generic.Growable$class.loop$1(Growable.scala:54)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:57)
at scala.collection.mutable.SetBuilder.$plus$plus$eq(SetBuilder.scala:20)
at scala.collection.TraversableLike$class.to(TraversableLike.scala:590)
at scala.collection.AbstractTraversable.to(Traversable.scala:104)
at scala.collection.TraversableOnce$class.toSet(TraversableOnce.scala:304)
at scala.collection.AbstractTraversable.toSet(Traversable.scala:104)
at org.apache.spark.sql.catalyst.expressions.AttributeSet$.apply(AttributeSet.scala:45)
Here is the code I used:
LinearSVC sv = new LinearSVC()
        .setMaxIter(10)
        .setRegParam(0.001)
        .setLabelCol(LABEL)
        .setFeaturesCol("features");
OneVsRest ovr = new OneVsRest().setClassifier(sv);
Pipeline pipeline = new Pipeline()
        .setStages(new PipelineStage[] {
                labelIndexer,
                tokenizer,
                numberNormalizer,
                remover,
                wordVectorizer,
                idf,
                ovr,
                deIndexer
        });
long trainStart = System.currentTimeMillis();
// Fit the pipeline to training documents.
PipelineModel model = pipeline.fit(training);

java.lang.IllegalStateException: A connection to a distributed system already exists in this VM. It has the following configuration:

public class GemfireTest {
    public static void main(String[] args) throws NameResolutionException,
            TypeMismatchException, QueryInvocationTargetException, FunctionDomainException {
        // Start an embedded locator and server in this JVM...
        ServerLauncher serverLauncher = new ServerLauncher.Builder()
                .setMemberName("server1")
                .setServerPort(40404)
                .set("start-locator", "127.0.0.1[9090]")
                .build();
        serverLauncher.start();

        // ...and then also create a client cache in the same JVM.
        String queryString = "SELECT * FROM /gemregion";
        ClientCache cache = new ClientCacheFactory().create();
        QueryService queryService = cache.getQueryService();
        Query query = queryService.newQuery(queryString);
        SelectResults results = (SelectResults) query.execute();
        int size = results.size();
        System.out.println(size);
    }
}
I am trying to run a locator and a server inside my Java application (code above) and am getting the exception below:
Exception in thread "main" java.lang.IllegalStateException: A
connection to a distributed system already exists in this VM. It has
the following configuration: ack-severe-alert-threshold="0"
ack-wait-threshold="15" archive-disk-space-limit="0"
archive-file-size-limit="0" async-distribution-timeout="0"
async-max-queue-size="8" async-queue-timeout="60000"
bind-address="" cache-xml-file="cache.xml"
cluster-configuration-dir="" cluster-ssl-ciphers="any"
cluster-ssl-enabled="false" cluster-ssl-keystore=""
cluster-ssl-keystore-password="" cluster-ssl-keystore-type=""
cluster-ssl-protocols="any"
cluster-ssl-require-authentication="true" cluster-ssl-truststore=""
cluster-ssl-truststore-password="" conflate-events="server"
conserve-sockets="true" delta-propagation="true"
deploy-working-dir="C:\Users\Saranya\IdeaProjects\Gemfire"
disable-auto-reconnect="false" disable-tcp="false"
distributed-system-id="-1" distributed-transactions="false"
durable-client-id="" durable-client-timeout="300"
enable-cluster-configuration="true"
enable-network-partition-detection="true"
enable-time-statistics="false" enforce-unique-host="false"
gateway-ssl-ciphers="any" gateway-ssl-enabled="false"
gateway-ssl-keystore="" gateway-ssl-keystore-password=""
gateway-ssl-keystore-type="" gateway-ssl-protocols="any"
gateway-ssl-require-authentication="true" gateway-ssl-truststore=""
gateway-ssl-truststore-password="" groups=""
http-service-bind-address="" http-service-port="7070"
http-service-ssl-ciphers="any" http-service-ssl-enabled="false"
http-service-ssl-keystore="" http-service-ssl-keystore-password=""
http-service-ssl-keystore-type="" http-service-ssl-protocols="any"
http-service-ssl-require-authentication="false"
http-service-ssl-truststore=""
http-service-ssl-truststore-password="" jmx-manager="false"
jmx-manager-access-file="" jmx-manager-bind-address=""
jmx-manager-hostname-for-clients="" jmx-manager-http-port="7070"
jmx-manager-password-file="" jmx-manager-port="1099"
jmx-manager-ssl-ciphers="any" jmx-manager-ssl-enabled="false"
jmx-manager-ssl-keystore="" jmx-manager-ssl-keystore-password=""
jmx-manager-ssl-keystore-type="" jmx-manager-ssl-protocols="any"
jmx-manager-ssl-require-authentication="true"
jmx-manager-ssl-truststore="" jmx-manager-ssl-truststore-password=""
jmx-manager-start="false" jmx-manager-update-rate="2000"
load-cluster-configuration-from-dir="false" locator-wait-time="0"
locators="127.0.0.1[9090]" (wanted "") lock-memory="false"
log-disk-space-limit="0"
log-file="C:\Users\Saranya\IdeaProjects\Gemfire\server1.log"
(wanted "") log-file-size-limit="0" log-level="config" max-num-reconnect-tries="3" max-wait-time-reconnect="60000"
mcast-address="/239.192.81.1" mcast-flow-control="1048576, 0.25,
5000" mcast-port="0" mcast-recv-buffer-size="1048576"
mcast-send-buffer-size="65535" mcast-ttl="32"
member-timeout="5000" membership-port-range="[1024,65535]"
memcached-bind-address="" memcached-port="0"
memcached-protocol="ASCII" name="server1" (wanted "")
off-heap-memory-size="" redis-bind-address="" redis-password=""
redis-port="0" redundancy-zone="" remote-locators=""
remove-unresponsive-client="false" roles=""
security-client-accessor="" security-client-accessor-pp=""
security-client-auth-init="" security-client-authenticator=""
security-client-dhalgo="" security-log-file=""
security-log-level="config" security-manager=""
security-peer-auth-init="" security-peer-authenticator=""
security-peer-verifymember-timeout="1000" security-post-processor=""
security-shiro-init="" security-udp-dhalgo=""
serializable-object-filter="!" server-bind-address=""
server-ssl-ciphers="any" server-ssl-enabled="false"
server-ssl-keystore="" server-ssl-keystore-password=""
server-ssl-keystore-type="" server-ssl-protocols="any"
server-ssl-require-authentication="true" server-ssl-truststore=""
server-ssl-truststore-password="" socket-buffer-size="32768"
socket-lease-time="60000" ssl-ciphers="any" ssl-cluster-alias=""
ssl-default-alias="" ssl-enabled-components="[]"
ssl-gateway-alias="" ssl-jmx-alias="" ssl-keystore=""
ssl-keystore-password="" ssl-keystore-type="" ssl-locator-alias=""
ssl-protocols="any" ssl-require-authentication="true"
ssl-server-alias="" ssl-truststore="" ssl-truststore-password=""
ssl-truststore-type="" ssl-web-alias=""
ssl-web-require-authentication="false" start-dev-rest-api="false"
start-locator="127.0.0.1[9090]" (wanted "")*
statistic-archive-file="" statistic-sample-rate="1000"
statistic-sampling-enabled="true" tcp-port="0"
udp-fragment-size="60000" udp-recv-buffer-size="1048576"
udp-send-buffer-size="65535" use-cluster-configuration="true"
user-command-packages="" validate-serializable-objects="false"
at org.apache.geode.distributed.internal.InternalDistributedSystem.validateSameProperties(InternalDistributedSystem.java:2959)
at org.apache.geode.distributed.DistributedSystem.connect(DistributedSystem.java:199)
at org.apache.geode.cache.client.ClientCacheFactory.basicCreate(ClientCacheFactory.java:243)
at org.apache.geode.cache.client.ClientCacheFactory.create(ClientCacheFactory.java:214)
at GemfireTest.main(GemfireTest.java:61)
How to solve this exception?
The error here is pretty self-explanatory: you can't have more than one connection to a distributed system within a single JVM. In this particular case you're starting both a server cache (ServerLauncher) and a client cache (ClientCacheFactory) within the same JVM, which is not supported.
To solve the issue, use two different applications or JVMs: one for the server and another one for the client executing the query, as in the sketch below.
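As a minimal sketch of the client side (assuming the server and locator from the question are already running in a separate JVM, and that the /gemregion region exists on that cluster; the class name GemfireClientTest is hypothetical):

import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.query.Query;
import org.apache.geode.cache.query.QueryService;
import org.apache.geode.cache.query.SelectResults;

public class GemfireClientTest {
    public static void main(String[] args) throws Exception {
        // Client JVM only: connect to the locator started by the server JVM.
        ClientCache cache = new ClientCacheFactory()
                .addPoolLocator("127.0.0.1", 9090)
                .create();
        QueryService queryService = cache.getQueryService();
        Query query = queryService.newQuery("SELECT * FROM /gemregion");
        SelectResults<?> results = (SelectResults<?>) query.execute();
        System.out.println(results.size());
        cache.close();
    }
}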
Cheers.

How can I remove the output layer of a pre-trained model with the TensorFlow Java API?

I have a pre-trained model like Inception-v3. I want to remove the output layer and use the rest of the model for image recognition, starting from the example given by TensorFlow.
The Python framework Keras has a method like model.layers.pop() for this. I tried to do the same with the TensorFlow Java API. First I tried to use DL4J, but when I imported the Keras model, I got an error like this:
2017-06-15 21:15:43 INFO KerasInceptionV3Net:52 - Importing Inception model from data/inception-model.json
2017-06-15 21:15:43 INFO KerasInceptionV3Net:53 - Importing Weights model from data/inception_v3_complete
Exception in thread "main" java.lang.RuntimeException: Unknown exception.
at org.bytedeco.javacpp.hdf5$H5File.allocate(Native Method)
at org.bytedeco.javacpp.hdf5$H5File.<init>(hdf5.java:12713)
at org.deeplearning4j.nn.modelimport.keras.Hdf5Archive.<init>(Hdf5Archive.java:61)
at org.deeplearning4j.nn.modelimport.keras.KerasModel$ModelBuilder.weightsHdf5Filename(KerasModel.java:603)
at org.deeplearning4j.nn.modelimport.keras.KerasModelImport.importKerasModelAndWeights(KerasModelImport.java:176)
at edu.usc.irds.dl.dl4j.examples.KerasInceptionV3Net.<init>(KerasInceptionV3Net.java:55)
at edu.usc.irds.dl.dl4j.examples.KerasInceptionV3Net.main(KerasInceptionV3Net.java:108)
HDF5-DIAG: Error detected in HDF5 (1.10.0-patch1) thread 0:
#000: C:\autotest\HDF5110ReleaseRWDITAR\src\H5F.c line 579 in H5Fopen(): unable to open file
major: File accessibilty
minor: Unable to open file
#001: C:\autotest\HDF5110ReleaseRWDITAR\src\H5Fint.c line 1100 in H5F_open(): unable to open file: time = Thu Jun 15 21:15:44 2017,name = 'data/inception_v3_complete', tent_flags = 0
major: File accessibilty
minor: Unable to open file
#002: C:\autotest\HDF5110ReleaseRWDITAR\src\H5FD.c line 812 in H5FD_open(): open failed
major: Virtual File Layer
minor: Unable to initialize object
#003: C:\autotest\HDF5110ReleaseRWDITAR\src\H5FDsec2.c line 348 in H5FD_sec2_open(): unable to open file: name = 'data/inception_v3_complete', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0
major: File accessibilty
minor: Unable to open file
So I went back to TensorFlow. My plan is to modify the model in Keras and convert it to a TensorFlow graph. Here is my conversion script:
input_fld = './'
output_node_names_of_input_network = ["pred0"]
write_graph_def_ascii_flag = True
output_node_names_of_final_network = 'output_node'
output_graph_name = 'test2.pb'

from keras.models import load_model
import tensorflow as tf
import os
import os.path as osp
from keras.applications.inception_v3 import InceptionV3
from keras.applications.vgg16 import VGG16
from keras.models import Sequential
from keras.layers.core import Flatten, Dense, Dropout
from keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D
from keras.optimizers import SGD

output_fld = input_fld + 'tensorflow_model/'
if not os.path.isdir(output_fld):
    os.mkdir(output_fld)

net_model = InceptionV3(weights='imagenet', include_top=True)

num_output = len(output_node_names_of_input_network)
pred = [None] * num_output
pred_node_names = [None] * num_output
for i in range(num_output):
    pred_node_names[i] = output_node_names_of_final_network + str(i)
    pred[i] = tf.identity(net_model.output[i], name=pred_node_names[i])
print('output nodes names are: ', pred_node_names)

from keras import backend as K
sess = K.get_session()

if write_graph_def_ascii_flag:
    f = 'only_the_graph_def.pb.ascii'
    tf.train.write_graph(sess.graph.as_graph_def(), output_fld, f, as_text=True)
    print('saved the graph definition in ascii format at: ', osp.join(output_fld, f))

from tensorflow.python.framework import graph_util
from tensorflow.python.framework import graph_io
constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph.as_graph_def(), pred_node_names)
graph_io.write_graph(constant_graph, output_fld, output_graph_name, as_text=False)
print('saved the constant graph (ready for inference) at: ', osp.join(output_fld, output_graph_name))
I got the model as a .pb file, but when I plugged it into the TensorFlow LabelImage example, I got this error:
Exception in thread "main" java.lang.IllegalArgumentException: You must feed a value for placeholder tensor 'batch_normalization_1/keras_learning_phase' with dtype bool
[[Node: batch_normalization_1/keras_learning_phase = Placeholder[dtype=DT_BOOL, shape=<unknown>, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
at org.tensorflow.Session.run(Native Method)
at org.tensorflow.Session.access$100(Session.java:48)
at org.tensorflow.Session$Runner.runHelper(Session.java:285)
at org.tensorflow.Session$Runner.run(Session.java:235)
at com.dlut.cmh.sheng.LabelImage.executeInceptionGraph(LabelImage.java:98)
at com.dlut.cmh.sheng.LabelImage.main(LabelImage.java:51)
I don't know how to solve this. Can anyone help me? Or do you have another way to do this?
The error message you get from the TensorFlow Java API:
Exception in thread "main" java.lang.IllegalArgumentException: You must feed a value for placeholder tensor 'batch_normalization_1/keras_learning_phase' with dtype bool
[[Node: batch_normalization_1/keras_learning_phase = Placeholder[dtype=DT_BOOL, shape=<unknown>, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
suggests that the model is constructed in a way that requires you to feed a boolean value for the tensor named batch_normalization_1/keras_learning_phase.
So, you'd have to include that in your call to run by changing:
try (Session s = new Session(g);
     Tensor result = s.runner().feed("input", image).fetch("output").run().get(0)) {
to something like:
try (Session s = new Session(g);
     Tensor learning_phase = Tensor.create(false);
     Tensor result = s.runner()
             .feed("input", image)
             .feed("batch_normalization_1/keras_learning_phase", learning_phase)
             .fetch("output")
             .run()
             .get(0)) {
The names of nodes you feed and fetch depend on the model, so it's possible that the names of the 'input' and 'output' nodes are different as well.
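One way to discover those names (a sketch, assuming the test2.pb file produced by the conversion script above and a TensorFlow Java API version that exposes Graph.operations()):

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Iterator;
import org.tensorflow.Graph;
import org.tensorflow.Operation;

byte[] graphDef = Files.readAllBytes(Paths.get("tensorflow_model/test2.pb"));
try (Graph g = new Graph()) {
    g.importGraphDef(graphDef);
    // Print every operation name in the graph; the names passed to
    // feed() and fetch() must match entries in this list.
    Iterator<Operation> ops = g.operations();
    while (ops.hasNext()) {
        System.out.println(ops.next().name());
    }
}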
You might also want to consider using the TensorFlow SavedModel format (see also https://github.com/tensorflow/serving/issues/310#issuecomment-297015251)
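A minimal sketch of that route (the export directory export/1 and the serve tag are hypothetical placeholders for however you export the model):

import org.tensorflow.SavedModelBundle;
import org.tensorflow.Session;

try (SavedModelBundle bundle = SavedModelBundle.load("export/1", "serve")) {
    // The bundle owns both the graph and a ready-to-use session.
    Session s = bundle.session();
    // s.runner().feed(...).fetch(...).run() as in the snippet above.
}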
Hope that helps.

I am getting an error when using Flume

I am using the configuration below:
ClouderaTwitterAgent.sources = Twitter
ClouderaTwitterAgent.channels = MemChannel
ClouderaTwitterAgent.sinks = HDFS
ClouderaTwitterAgent.sources.Twitter.type = com.cloudera.flume.source.TwitterSource
ClouderaTwitterAgent.sources.Twitter.channels = MemChannel
ClouderaTwitterAgent.sources.Twitter.consumerKey = xxxxxxxxxxxxx
ClouderaTwitterAgent.sources.Twitter.consumerSecret = xxxxxxxxxxxxx
ClouderaTwitterAgent.sources.Twitter.accessToken = xxxxxxxxxxxxxxxx
ClouderaTwitterAgent.sources.Twitter.accessTokenSecret = xxxxxxxxxxxxxx
ClouderaTwitterAgent.sources.Twitter.keywords = Sully
ClouderaTwitterAgent.sinks.HDFS.channel = MemChannel
ClouderaTwitterAgent.sinks.HDFS.type = hdfs
ClouderaTwitterAgent.sinks.HDFS.hdfs.path = hdfs://localhost:9000/user/tweets
ClouderaTwitterAgent.sinks.HDFS.hdfs.fileType = DataStream
ClouderaTwitterAgent.sinks.HDFS.hdfs.writeFormat = Text
ClouderaTwitterAgent.sinks.HDFS.hdfs.batchSize = 1000
ClouderaTwitterAgent.sinks.HDFS.hdfs.rollSize = 0
ClouderaTwitterAgent.sinks.HDFS.hdfs.rollCount = 10000
ClouderaTwitterAgent.channels.MemChannel.type = memory
ClouderaTwitterAgent.channels.MemChannel.capacity = 10000
ClouderaTwitterAgent.channels.MemChannel.transactionCapacity = 100
and this is the command that I am using to run Flume:
`bin/flume-ng agent --conf ./conf/ -f conf/flume-cloudera.conf -Dflume.root.logger=DEBUG,console -n ClouderaTwitterAgent`
and this is the error that I am getting:
2016-09-20 14:53:14,245 (Twitter4J Async Dispatcher[0]) [DEBUG - com.cloudera.flume.source.TwitterSource$1.onStatus(TwitterSource.java:121)] tweet arrived
2016-09-20 14:53:16,073 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:234)] Creating hdfs://localhost:9000/user/tweets/FlumeData.1474363316543.tmp
2016-09-20 14:53:16,113 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:459)] process failed
java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:131)
at org.apache.hadoop.security.Groups.<init>(Groups.java:64)
at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:240)
at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:255)
at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:232)
at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:718)
at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:703)
at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:605)
at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2554)
at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2546)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2412)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:243)
at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:235)
at org.apache.flume.sink.hdfs.BucketWriter$9$1.run(BucketWriter.java:679)
at org.apache.flume.auth.SimpleAuthenticator.execute(SimpleAuthenticator.java:50)
at org.apache.flume.sink.hdfs.BucketWriter$9.call(BucketWriter.java:676)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
... 21 more
Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.security.JniBasedUnixGroupsMapping
at org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.<init>(JniBasedUnixGroupsMappingWithFallback.java:38)
... 26 more
2016-09-20 14:53:16,121 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:160)] Unable to deliver event. Exception follows.
Can anyone please help me? I am new to this; I got this configuration from the internet and followed every instruction, and I have already searched for a solution to this problem.
Here is what I have done so far:
1. Checked my system clock.
2. Removed the HDFS folder and then made a new folder.
3. Formatted the namenode.
4. Restarted the agent several times.
