What are the correct settings for the files core-site.xml and mapred-site.xml in Hadoop?
I'm asking because I'm trying to run Hadoop but get the following error:
starting secondarynamenode, logging to /opt/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-secondarynamenode-lbad012.out
lbad012: Exception in thread "main" java.lang.IllegalArgumentException: Does not contain a valid host:port authority: file:///
lbad012: at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
lbad012: at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:212)
lbad012: at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:244)
lbad012: at org.apache.hadoop.hdfs.server.namenode.NameNode.getServiceAddress(NameNode.java:236)
lbad012: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:194)
lbad012: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:150)
lbad012: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:676)
You didn't specify which version of Hadoop you're using, or whether you're using CDH (Cloudera's Hadoop distribution).
You also didn't specify whether you're looking to run a pseudo-distributed, single-node, or fully distributed cluster setup. Those options are exactly what gets configured in the files you mention (core-site.xml and mapred-site.xml).
Hadoop is very finicky, so these details are important when asking questions about it.
Since you didn't specify any of the above, I'm guessing you're a beginner, in which case this guide should help you (and show you what core-site.xml and mapred-site.xml should look like in a pseudo-distributed configuration).
In any case, Hadoop publishes a 'Quick Start' guide for almost every version it releases, so find the one that matches your version and setup and it should be fairly easy to walk through.
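For reference, here is the minimal pseudo-distributed configuration from the official Hadoop 1.x 'Single Node Setup' guide (stock localhost values; adjust the host and ports for a real cluster). Your particular error, a host:port authority of file:///, usually means fs.default.name was left at its default, so the secondary namenode cannot resolve the namenode address.
core-site.xml:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
mapred-site.xml:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
After editing these, format HDFS once with bin/hadoop namenode -format, then start the daemons with bin/start-all.sh.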
In the Experimenter mode in Weka I have this configuration:
Results destination: Ex1_1.arff
Experiment type: cross-validation | number of folds: 10 | Classification
Dataset: soybean.arff.gz
Iteration Control: Number of repetitions: 10 | Data sets first
Algorithms: J48 -C 0.25 -M 2
This experiment is saved as Ex1_1.xml (saving with .exp gives the following error: Couldn't save experiment file: Ex1_1.exp Reason: java.awt.Component$AccessibleAWTComponent$AccessibleAWTComponentHandler)
And when I try to run this experiment I get the following error: Problem creating experiment copy to run: java.awt.Component$AccessibleAWTComponent$AccessibleAWTComponentHandler
So it seems I have a problem with something like AWT in Java... Does somebody have an idea?
Thank you!
I am trying to build a Java wrapper around the native SDK and I am rewriting NanoPlayer. I think I managed to get the same flow of events as the native version, but when I play a song, I get a QUEUELIST_NEED_NATURAL_NEXT instead of MEDIASTREAM_DATA_READY. You can see the output below.
What could cause this? And what am I supposed to do on such event?
Thanks a lot in advance.
Stefano
34511:327803 dz_crash_handler: [dz_crash_handler_init:286] Crash Handler available
Device ID: e91f2fce333d4a7ab9b75cfaee3115e4
### MENU
Please press key for comands:
P : PLAY / PAUSE
S : START/STOP
+ : NEXT
- : PREVIOUS
R : NEXT REPEAT MODE
? : TOGGLE SHUFFLE MODE
Q : QUIT
[1-4] : LOAD CONTENT [1-4]
#
OnConnectCallback (native#0x7f1d843271e0,native#0x7f1d200f2a60,native#0x7f1d842c95c0)(App:native#0x7f1d842c95c0:1)
++++ CONNECT_EVENT ++++ USER_OFFLINE_AVAILABLE
OnConnectCallback (native#0x7f1d843271e0,native#0x7f1d200eee50,native#0x7f1d842c95c0)(App:native#0x7f1d842c95c0:4)
++++ CONNECT_EVENT ++++ USER_LOGIN_OK
LOAD => dzmedia:///track/136332242
(App:native#0x7f1d842c95c0:2) ==== PLAYER_EVENT ==== QUEUELIST_LOADED for idx: 0
Entity: line 1: parser error : Document is empty
sas_noad = true;
^
S
PLAY track n° 0 of => dzmedia:///track/136332242
PLAY track n° 0 of => dzmedia:///track/136332242
(App:native#0x7f1d842c95c0:7) ==== PLAYER_EVENT ==== QUEUELIST_TRACK_SELECTED for idx: 0 - is_preview:false
canPauseUnpause: true, canSeek: true, numSkipAllowed: 1 now:{...}
(App:native#0x7f1d842c95c0:8) ==== PLAYER_EVENT ==== QUEUELIST_NEED_NATURAL_NEXT for idx: 0
(App:native#0x7f1d842c95c0:11) ==== PLAYER_EVENT ==== UNKNOWN or default
I found the issue: I provided a wrong cache path value in the config object. It must be an existing directory, while I was setting a file (which existed, but is not a directory).
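A cheap safeguard on the wrapper side is to verify (or create) that directory before building the config object; a minimal sketch in plain Java, with a purely illustrative path:

import java.io.File;

public class CacheDirCheck {
    public static void main(String[] args) {
        // the SDK expects an existing directory here, not a file
        File cacheDir = new File("/path/to/deezer-cache");  // illustrative path
        if (!cacheDir.isDirectory()) {
            cacheDir.mkdirs();  // create it (and any missing parents) up front
        }
        // now it is safe to pass cacheDir.getAbsolutePath() to the config object
    }
}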
Advice for beginners: to see more log output, do not call dz_connect_debug_log_disable().
Hope this helps
Stefano
I am selecting certain RDF properties using Apache Marmotta LDPath. The documentation (http://marmotta.apache.org/ldpath/language.html) states that the fn: and lmf: prefixes do not need to be defined explicitly.
My code is:
#prefix dc : <http://purl.org/dc/elements/1.1/> ;
id = . :: xsd:string ;
title = dc:title :: xsd:string ;
file = fn:content(.) :: lmf:text_es ;
but I get the following ParseException:
Caused by: org.apache.marmotta.ldpath.parser.ParseException: function with URI http://www.newmedialab.at/lmf/functions/1.0/content does not exist
at org.apache.marmotta.ldpath.parser.LdPathParser.getFunction(LdPathParser.java:213)
at org.apache.marmotta.ldpath.parser.LdPathParser.FunctionSelector(LdPathParser.java:852)
at org.apache.marmotta.ldpath.parser.LdPathParser.AtomicSelector(LdPathParser.java:686)
at org.apache.marmotta.ldpath.parser.LdPathParser.Selector(LdPathParser.java:607)
at org.apache.marmotta.ldpath.parser.LdPathParser.Rule(LdPathParser.java:441)
at org.apache.marmotta.ldpath.parser.LdPathParser.Program(LdPathParser.java:406)
at org.apache.marmotta.ldpath.parser.LdPathParser.parseProgram(LdPathParser.java:112)
at org.apache.marmotta.ldpath.LDPath.programQuery(LDPath.java:235)
... 47 more
Edit
I'm using the LDPath core shipped with Fedora (Duraspace) 4.5.1. My goal is to have Solr index the full text of binary resources; any way to get there works for me.
For whoever needs it:
it seems this subset of the Apache Marmotta LDPath library does not support complex functions such as fn:, lmf:, and others.
To index the full text of binary resources, it is necessary to use something like Apache Tika instead.
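As an illustration of the Tika route, here is a minimal sketch using Tika's facade class (the file path is hypothetical); it extracts the plain text you would then feed to Solr:

import java.io.File;
import org.apache.tika.Tika;

public class ExtractFullText {
    public static void main(String[] args) throws Exception {
        Tika tika = new Tika();  // facade that auto-detects the media type
        String text = tika.parseToString(new File("/path/to/binary-resource.pdf"));
        System.out.println(text);  // plain text, ready for Solr indexing
    }
}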
According to this thread, Buck at the moment does not have full multi-dexing support, at least not in the sense of how multi-dexing is solved by the 'official' solutions.
What I'm confused about: is this problem solved if I only go the Exopackage way? It would still be OK for me to produce release builds with Gradle (slow) and do day-to-day development with Buck's Exopackage solution.
I get that Exopackage will result in a single main shell .dex containing the loading code for the secondary dexes. But does the Exopackage build produce multiple secondary .dex files, or only a single one (which would hit the 65k method count limit again)?
Buck does support multi-dex, which you set up with Exopackage (I guess you could call Exopackage an extension to Buck). This lets you go past the 65k limit. My project had more than 65k methods and it works just fine with Buck + Exopackage.
Here are my binary params when using Exopackage:
ANDROID_BINARY_PARAMS = {
'name' : 'pumpup',
'linear_alloc_hard_limit' : 16 * 1024 * 1024,
'use_linear_alloc_split_dex' : True,
'manifest' : 'AndroidManifest.xml',
'keystore' : ':debug_keystore',
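# split the dex and ship code as multiple secondary dex files (this is what gets you past the 65k limit)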
'use_split_dex' : True,
'exopackage_modes' : ['secondary_dex'],
'primary_dex_patterns' : [
'^co/pumpup/app/AppShell^',
'^co/pumpup/app/BuildConfig^',
'^com/facebook/buck/android/support/exopackage/',
],
'deps': [
':main-lib',
':application-lib',
],
}
Notice the use_split_dex = True?
So you'll be fine!
I have a tutorial on setting up Buck here:
Buck Tutorial
P.S. Make sure you install watchman for the best speeds
In short: I'd like to know the name of this format!
I would like to know whether this is a common, named format or just a simple self-invented config file:
scenes : {
Scene : {
class : Scene
sources : {
Game Capture : {
render : 1
class : GraphicsCapture
data : {
window : "[duke3d]: Duke Nukem 3D Atomic Edition 1.4.3 STABLE"
windowClass : SDL_app
executable : duke3d.exe
stretchImage : 0
alphaBlend : 0
ignoreAspect : 0
captureMouse : 1
invertMouse : 0
safeHook : 0
useHotkey : 0
hotkey : 123
gamma : 100
}
cx : 1920
cy : 1080
}
}
}
}
My background is that I would like to read multiple files like the one above, and I don't want to implement a whole new parser for this. That's why I want to fall back on Java libraries which already implement this feature. But without knowing what such a format is called, it's quite difficult to search for these libraries.
// additional info
This is a config file or a "scene file" for Open Broadcaster Software.
Filename extension is .xconfig
This appears to be a config file or a "scene file" for Open Broadcaster Software.
When used with OBS it has an extension of .xconfig.
Hope this helps.
-Yang
I got some feedback from the main developer of these files.
As I thought, this is not a known format, just a simple config file.
Solved!
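For anyone else who lands here: since it's a one-off format, a small hand-rolled parser is usually enough. Below is a minimal sketch in plain Java (class and method names are my own invention, and it ignores edge cases such as escaped quotes inside strings) that reads the nested key : value / key : { ... } structure shown above into nested Maps:

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class XConfigParser {
    private final List<String> lines;
    private int pos = 0;

    public XConfigParser(List<String> lines) {
        this.lines = lines;
    }

    // Parses "key : value" and "key : {" entries until a closing '}' or end of input.
    public Map<String, Object> parseBlock() {
        Map<String, Object> block = new LinkedHashMap<>();
        while (pos < lines.size()) {
            String line = lines.get(pos++).trim();
            if (line.isEmpty()) continue;
            if (line.equals("}")) break;     // end of the current block
            int colon = line.indexOf(':');
            if (colon < 0) continue;         // skip lines that are not key : value
            String key = line.substring(0, colon).trim();
            String value = line.substring(colon + 1).trim();
            if (value.equals("{")) {
                block.put(key, parseBlock());  // nested block: recurse
            } else if (value.length() >= 2 && value.startsWith("\"") && value.endsWith("\"")) {
                block.put(key, value.substring(1, value.length() - 1));  // strip quotes
            } else {
                block.put(key, value);         // numbers etc. kept as raw strings
            }
        }
        return block;
    }

    public static void main(String[] args) throws Exception {
        Map<String, Object> config =
                new XConfigParser(Files.readAllLines(Paths.get(args[0]))).parseBlock();
        System.out.println(config);
    }
}

Running it over the sample above yields nested LinkedHashMaps mirroring the scenes/Scene/sources structure, with quoted strings unwrapped and everything else kept as strings.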