Short: I'd like to know the name of this format!
I would like to know whether this is a known, common format or just a simple self-made config file:
scenes : {
  Scene : {
    class : Scene
    sources : {
      Game Capture : {
        render : 1
        class : GraphicsCapture
        data : {
          window : "[duke3d]: Duke Nukem 3D Atomic Edition 1.4.3 STABLE"
          windowClass : SDL_app
          executable : duke3d.exe
          stretchImage : 0
          alphaBlend : 0
          ignoreAspect : 0
          captureMouse : 1
          invertMouse : 0
          safeHook : 0
          useHotkey : 0
          hotkey : 123
          gamma : 100
        }
        cx : 1920
        cy : 1080
      }
    }
  }
}
Some background: I would like to read multiple files like the one above, and I don't want to implement a whole new parser for this. That's why I want to fall back on Java libraries which already implement this feature. But without knowing what this format is called, it's quite difficult to search for such libraries.
Additional info: this is a config file, or a "scene file", for Open Broadcaster Software. The filename extension is .xconfig.
This appears to be a config file or a "scene file" for Open Broadcaster Software. When used with OBS it has an extension of .xconfig.
Hope this helps.
-Yang
I got some feedback from the main developer of these files.
As I thought, this is not a known format - just a simple config file.
Solved!
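Since it turned out to be a plain home-grown config file, there is no off-the-shelf parser for it. For completeness, here is a minimal, hedged sketch of how such a line-based "key : value" / "key : { ... }" structure could be read in Java; the class and method names are made up for illustration, and it only handles the constructs visible in the sample above.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical names; not an official OBS parser.
public class SceneConfigParser {

    public static Map<String, Object> parse(List<String> lines) {
        Map<String, Object> root = new LinkedHashMap<>();
        Deque<Map<String, Object>> stack = new ArrayDeque<>();
        stack.push(root);

        for (String raw : lines) {
            String line = raw.trim();
            if (line.isEmpty()) {
                continue;
            }
            if (line.equals("}")) {
                // end of the current nested block
                stack.pop();
                continue;
            }
            int sep = line.indexOf(" : ");
            if (sep < 0) {
                continue; // not a "key : value" line; ignored in this sketch
            }
            String key = line.substring(0, sep).trim();
            String value = line.substring(sep + 3).trim();
            if (value.equals("{")) {
                // start of a nested block
                Map<String, Object> child = new LinkedHashMap<>();
                stack.peek().put(key, child);
                stack.push(child);
            } else {
                // scalar values are kept as raw strings here
                stack.peek().put(key, value);
            }
        }
        return root;
    }
}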
Related
In the Experimenter mode in Weka I have this configuration:
Results destination : Ex1_1.arff
Experiment type : cross-validation | number of folds : 10 | Classification
Dataset : soybean.arff.gz
Iteration Control : Number of repetitions : 10 | Data sets first
Algorithms : J48 -C 0.25 -M 2
The experiment is saved as Ex1_1.xml (saving with .exp gives the following error: Couldn't save experiment file: Ex1_1.exp Reason: java.awt.Component$AccessibleAWTComponent$AccessibleAWTComponentHandler).
And when I try to run this experiment I get the following error: Problem creating experiment copy to run: java.awt.Component$AccessibleAWTComponent$AccessibleAWTComponentHandler
So it seems I have a problem with something like AWT in Java... Does somebody have an idea?
Thank you!
According to this thread, Buck at the moment does not have full multi-dexing support - at least not in the sense of how multi-dexing is being solved with 'official' solutions.
What I'm confused about is: is this problem solved if I only go the Exopackage way? It would still be OK for me to produce release builds with Gradle (slow) and do day-to-day development with Buck's Exopackage solution.
I get that Exopackage will result in a single main shell .dex containing the loading code for the secondary dexes. But does the Exopackage build produce multiple secondary .dex files, or only a single one (which would hit the 65k method count limit again)?
Buck does support multi-dex, which you set up with Exopackage (I guess you could call Exopackage an extension to Buck). This lets you go past the 65k method limit. My project has more than 65k methods and it works just fine with Buck + Exopackage.
Here are my binary params when using Exopackage:
ANDROID_BINARY_PARAMS = {
    'name' : 'pumpup',
    'linear_alloc_hard_limit' : 16 * 1024 * 1024,
    'use_linear_alloc_split_dex' : True,
    'manifest' : 'AndroidManifest.xml',
    'keystore' : ':debug_keystore',
    'use_split_dex' : True,
    'exopackage_modes' : ['secondary_dex'],
    'primary_dex_patterns' : [
        '^co/pumpup/app/AppShell^',
        '^co/pumpup/app/BuildConfig^',
        '^com/facebook/buck/android/support/exopackage/',
    ],
    'deps': [
        ':main-lib',
        ':application-lib',
    ],
}
Notice the use_split_dex = True?
So you'll be fine!
I have a tutorial on setting up Buck here:
Buck Tutorial
P.S. Make sure you install watchman for the best speeds
What is the correct setting for the files core-site.xml and mapred-site.xml in Hadoop?
I ask because I'm trying to run Hadoop but get the following error:
starting secondarynamenode, logging to /opt/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-secondarynamenode-lbad012.out
lbad012: Exception in thread "main" java.lang.IllegalArgumentException: Does not contain a valid host:port authority: file:///
lbad012:     at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
lbad012:     at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:212)
lbad012:     at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:244)
lbad012:     at org.apache.hadoop.hdfs.server.namenode.NameNode.getServiceAddress(NameNode.java:236)
lbad012:     at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:194)
lbad012:     at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:150)
lbad012:     at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:676)
You didn't specify which version of Hadoop you're using, or whether or not you're using CDH (Cloudera's Hadoop distro).
You also didn't specify whether you're looking to run a pseudo-distributed, single-node, or fully distributed cluster setup. These options are set up precisely in the files you're mentioning (core-site and mapred-site).
Hadoop is very finicky, so these details are important when asking questions about it.
Since you didn't specify any of the above, I'm guessing you're a beginner - in which case this guide should help you (and show you what core-site and mapred-site should look like in a pseudo-distributed configuration).
Anyway, Hadoop publishes a 'Quick Start' guide for almost every version it releases, so find the one that matches the version and setup you're looking for and it should be fairly easy to walk through.
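For reference, the error above ("Does not contain a valid host:port authority: file:///") typically means fs.default.name has not been set, so the NameNode address falls back to the local file system. A minimal pseudo-distributed setup for Hadoop 1.2.1 usually looks roughly like the sketch below; localhost and the ports are the usual defaults from the single-node quick-start guide, so adjust them for your own cluster.

<!-- conf/core-site.xml -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

<!-- conf/mapred-site.xml -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>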
Is there a Java library similar to libconfig for C++, where the config file is stored in a JSON-like format that can be edited by humans, and later read from the program?
I don't want to use Spring or any of the larger frameworks. What I'm looking for is a small, fast, self-contained library. I looked at java.util.Properties, but it doesn't seem to support hierarchical/nested config data.
I think https://github.com/typesafehub/config is exactly what you are looking for. The format is called HOCON (Human-Optimized Config Object Notation) and it is a superset of JSON.
Examples of HOCON:
HOCON that is also valid JSON:
{
  "foo" : {
    "bar" : 10,
    "baz" : 12
  }
}
HOCON also supports standard properties format, so the following is valid as well:
foo.bar=10
foo.baz=12
One of the features I find very useful is inheritance; it allows you to layer configurations. For instance, a library would have a reference.conf, and the application using the library would have an application.conf. The settings in application.conf will override the defaults in reference.conf.
Standard Behavior for loading configs:
The convenience method ConfigFactory.load() loads the following (first-listed are higher priority):
system properties
application.conf (all resources on classpath with this name)
application.json (all resources on classpath with this name)
application.properties (all resources on classpath with this name)
reference.conf (all resources on classpath with this name)
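A minimal Java sketch of reading the JSON-style example above with the Typesafe Config library, assuming it lives in application.conf on the classpath:

import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

public class ConfigExample {
    public static void main(String[] args) {
        // Loads application.conf / application.json / application.properties
        // plus reference.conf, in the priority order described above.
        Config config = ConfigFactory.load();

        int bar = config.getInt("foo.bar");   // 10
        int baz = config.getInt("foo.baz");   // 12
        System.out.println(bar + " " + baz);
    }
}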
I found this HOCON example:
my.organization {
  project {
    name = "DeathStar"
    description = ${my.organization.project.name} "is a tool to take control over whole world. By world I mean couch, computer and fridge ;)"
  }
  team {
    members = [
      "Aneta"
      "Kamil"
      "Lukasz"
      "Marcin"
    ]
  }
}
my.organization.team.avgAge = 26
To read the values (the snippet below is Scala):
val config = ConfigFactory.load()
config.getString("my.organization.project.name") // => DeathStar
config.getString("my.organization.project.description") // => DeathStar is a tool to take control over whole world. By world I mean couch, computer and fridge ;)
config.getInt("my.organization.team.avgAge") // => 26
config.getStringList("my.organization.team.members") // => [Aneta, Kamil, Lukasz, Marcin]
Reference: marcinkubala.wordpress.com
Apache Commons Configuration API and Constretto seem to be somewhat popular and support multiple formats (no JSON mentioned, though). I've personally never tried either, so YMMV.
There's a Java library to handle JSON files if that's what you're looking for:
http://www.json.org/java/index.html
Check out other tools on the main page:
http://json.org/
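If plain JSON is enough, a minimal sketch with the org.json reference implementation linked above could look like this (the file name config.json is made up for illustration):

import org.json.JSONObject;

import java.nio.file.Files;
import java.nio.file.Paths;

public class JsonConfigExample {
    public static void main(String[] args) throws Exception {
        // Read the whole config file into a string, then parse it.
        String text = new String(Files.readAllBytes(Paths.get("config.json")));
        JSONObject root = new JSONObject(text);

        int bar = root.getJSONObject("foo").getInt("bar");   // nested lookup
        System.out.println(bar);
    }
}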
I'm starting to build a record keeping database for the documents we manage on our system. Each document goes through a bunch of specific processing tasks that I will call here normalization, conversion and extraction.
The document processing may fail at any of these steps, so I'm looking for a solution where I can quickly store this information for archiving, but I should also be able to query it (and possibly summarize it). If I were to define my data structure in JSON, it would possibly look like this:
{ 10123 : [
    { queue : 'converter',
      startedAt : 'date-here',
      finishedAt : 'date-here',
      error : { message : 'error message', stackTrace : 'stack trace here' },
      machine : '192.168.0.1'
    },
    { queue : 'extractor',
      startedAt : 'date-here',
      finishedAt : 'date-here',
      error : { message : 'error message', stackTrace : 'stack trace here' },
      machine : '192.168.0.1'
    },
    { queue : 'extractor',
      startedAt : 'date-here',
      finishedAt : 'date-here',
      error : { message : 'error message', stackTrace : 'stack trace here' },
      machine : '192.168.0.1'
    }
] }
In an ideal world I would have the full processing history of a specific document, and I should also be able to detect which ones have failed and the average time each process takes.
Any hints on an ideal database solution to handle this? This would see possibly a couple of thousand writes a day.
The main solution is written in Java, so the DB should have a Java driver.
MongoDB is a good choice for this since it supports all of your expected features out of the box:
documents / embedded documents
JSON-compatible data model
querying support (except joins, of course)
super fast
Java driver supported by 10gen
Check out the MongoDB use cases for more info.
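A minimal Java sketch of what this could look like with the MongoDB Java sync driver; the database, collection, and field names are made up for illustration, and each processing step is stored as its own document with a precomputed duration so that averaging per queue stays a simple aggregation:

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Accumulators;
import com.mongodb.client.model.Aggregates;
import org.bson.Document;

import java.util.Arrays;
import java.util.Date;

public class ProcessingLog {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> events =
                    client.getDatabase("doctracking").getCollection("processing_events");

            // One document per processing step of a given source document.
            events.insertOne(new Document("documentId", 10123)
                    .append("queue", "converter")
                    .append("startedAt", new Date())
                    .append("finishedAt", new Date())
                    .append("durationMs", 1234L)
                    .append("machine", "192.168.0.1")
                    .append("error", new Document("message", "error message")
                            .append("stackTrace", "stack trace here")));

            // Average processing time per queue.
            for (Document d : events.aggregate(Arrays.asList(
                    Aggregates.group("$queue", Accumulators.avg("avgMs", "$durationMs"))))) {
                System.out.println(d.toJson());
            }
        }
    }
}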