I have a simple requirement to convert input JSON to a flat file in Mule 4, but I am unable to find any solid examples online. I started off by creating a sample schema as follows, but it's not working.
test.ffd schema:
form: FLATFILE
id: 'test'
tag: '1'
name: Request Header Record
values:
- { name: 'aa', type: String, length: 10 }
- { name: 'bb', type: String, length: 8 }
- { name: 'cc', type: String, length: 4 }
DataWeave:
%dw 2.0
output application/flatfile schemaPath='test.ffd'
---
{
aa : payload.a,
bb : payload.b,
cc : payload.c
}
Input JSON:
{
"a": "xxx",
"b": "yyy",
"c": "zzz"
}
But it fails, saying:
Message : "java.lang.IllegalStateException - Need to specify structureIdent or schemaIdent in writer configuration, while writing FlatFile at
4| {
| ...
8| }
How do I do this correctly?
The error message tells you what is missing:
Need to specify structureIdent or schemaIdent in writer configuration
Add one of them and the flat file (or fixed width) output should work fine.
For example, add segmentIdent:
%dw 2.0
output application/flatfile schemaPath = "test1.ffd",
segmentIdent = "test1"
---
payload map (a, index) -> {
aa: a.a,
bb: a.b,
cc: a.c
}
Here is an example of how to use FIXEDWIDTH properly: https://simpleflatservice.com/mule4/FixedWidthSchemaTransformation.html
Assuming you are trying to output a fixed width file, which it looks like you are, change
form: FLATFILE
to
form: FIXEDWIDTH
Keep in mind using this FFD will only work if you have a single structure. You could pass in:
payload map {
aa: $.a,
...
}
if you had a set, and it would still work; but if you need multiple structures, you won't be able to use the shorthand schema.
And to explain why you were getting this error, take a look at these docs, under "Writer properties (for Flat File)":
https://docs.mulesoft.com/mule-runtime/4.2/dataweave-formats#writer_properties_flat_file
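Putting both answers together, a corrected schema/script pair for the original example might look like this (a sketch; depending on the runtime version the structureIdent property may or may not be required, but it is one of the two properties the error message asks for):

```yaml
form: FIXEDWIDTH
id: 'test'
name: Request Header Record
values:
- { name: 'aa', type: String, length: 10 }
- { name: 'bb', type: String, length: 8 }
- { name: 'cc', type: String, length: 4 }
```

```dataweave
%dw 2.0
output application/flatfile schemaPath = "test.ffd", structureIdent = "test"
---
{
    aa: payload.a,
    bb: payload.b,
    cc: payload.c
}
```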
Related
I have a HOCON config file, something like
foo {
[
id: 1
f: abc
],
[
id: 2
f: [xyz , pqr]
]
}
At the backend, I want the field f as a Java list. So wherever the field f is a string, I should be able to convert it to a List. config.resolve() doesn't seem to work here, and I need a custom wrapper on top of it which I'm unable to think of. Is there any way this could be achieved?
I am trying to import some JSON data into my Elasticsearch and Kibana cluster using Logstash and its configuration. I am using a JSON file with three fields.
Elasticsearch version used: 6.5.3
Logstash version used: 6.5.3
Sample JSON file: test.json
{"name":"Jonathan","score":"9.9","address":"New Delhi"}
{"name":"Sam","score":"8.9","address":"New York"}
{"name":"Michelle","score":"9.0","address":"California"}
My configuration file: test.config
input{
file{
path => "/Users/amit/elasticsearch/data/test.json"
codec => json
sincedb_path => "/dev/null"
start_position => "beginning"
}
}
filter{
json{
source => "message"
}
mutate{
convert => {
"name" => "text"
"score" => "float"
"address" => "text"
}
}
}
output{
elasticsearch{
hosts => "localhost:9200"
index => "test"
}
stdout { codec => rubydebug }
}
I am trying to import this data into Elasticsearch using Logstash with the following command:
bin/logstash -f ../../data/test.config
But I get the following error message:
[2018-12-27T20:18:41,439][ERROR][logstash.pipeline] Error registering plugin {:pipeline_id=>"main", :plugin=>"#, #filter={\"name\"=>\"text\", \"score\"=>\"float\", \"address\"=>\"text\"}, id=>\"4a292b8b637c63de89c36b730212b3c706307f5fd385080369ac0cbeac3c2d53\", enable_metric=>true, periodic_flush=>false>>", :error=>"translation missing: en.logstash.agent.configuration.invalid_plugin_register", :thread=>"#"}
[2018-12-27T20:18:41,452][ERROR][logstash.pipeline] Pipeline aborted due to error {:pipeline_id=>"main", :exception=>#, :backtrace=>[
  "/Users/amit/elasticsearch/logstash/logstash-6.5.3/vendor/bundle/jruby/2.3.0/gems/logstash-filter-mutate-3.3.4/lib/logstash/filters/mutate.rb:219:in `block in register'",
  "org/jruby/RubyHash.java:1343:in `each'",
  "/Users/amit/elasticsearch/logstash/logstash-6.5.3/vendor/bundle/jruby/2.3.0/gems/logstash-filter-mutate-3.3.4/lib/logstash/filters/mutate.rb:217:in `register'",
  "/Users/amit/elasticsearch/logstash/logstash-6.5.3/logstash-core/lib/logstash/pipeline.rb:242:in `register_plugin'",
  "/Users/amit/elasticsearch/logstash/logstash-6.5.3/logstash-core/lib/logstash/pipeline.rb:253:in `block in register_plugins'",
  "org/jruby/RubyArray.java:1734:in `each'",
  "/Users/amit/elasticsearch/logstash/logstash-6.5.3/logstash-core/lib/logstash/pipeline.rb:253:in `register_plugins'",
  "/Users/amit/elasticsearch/logstash/logstash-6.5.3/logstash-core/lib/logstash/pipeline.rb:595:in `maybe_setup_out_plugins'",
  "/Users/amit/elasticsearch/logstash/logstash-6.5.3/logstash-core/lib/logstash/pipeline.rb:263:in `start_workers'",
  "/Users/amit/elasticsearch/logstash/logstash-6.5.3/logstash-core/lib/logstash/pipeline.rb:200:in `run'",
  "/Users/amit/elasticsearch/logstash/logstash-6.5.3/logstash-core/lib/logstash/pipeline.rb:160:in `block in start'"], :thread=>"#"}
[2018-12-27T20:18:41,474][ERROR][logstash.agent] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create, action_result: false", :backtrace=>nil}
[2018-12-27T20:18:41,705][INFO ][logstash.agent] Successfully started Logstash API endpoint {:port=>9600}
Also, if I remove the mutate filter from test.config, it works fine. But I want to change the type of the score field to float. Is there a problem with trying to mutate the fields during parsing, or am I missing something else? Thanks :)
https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html#plugins-filters-mutate-convert
It seems you can't use "text"; use "name" => "string" instead.
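For reference, the mutate filter's convert option only accepts Logstash types such as integer, float, string, and boolean; "text" is an Elasticsearch mapping type, not a Logstash conversion target. A corrected filter block for the config above might look like this (a sketch):

```
filter {
  json {
    source => "message"
  }
  mutate {
    convert => {
      # "float" is a valid conversion target; "text" is not.
      "score" => "float"
      # "name" and "address" already arrive as strings, so converting
      # them is unnecessary; "string" would be accepted as a no-op.
    }
  }
}
```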
I am trying to get Swagger to document my API, which is composed of Jersey-spring 2.22.2 with Spring 4.3 and Jackson 2.22.2.
The swagger package I'm using is:
<dependency>
<groupId>io.swagger</groupId>
<artifactId>swagger-jersey2-jaxrs</artifactId>
<scope>compile</scope>
<version>1.5.12</version>
</dependency>
one of the endpoint declarations:
@POST
@ApiOperation(
    value = "creates folder hierarchy type client|lead",
    notes = "creates folder hierarchy type client|lead"
)
@ApiResponses(value = {
    @ApiResponse(code = 200, message = "creation successfull")
})
@Path("create_type")
@Consumes(MediaType.MULTIPART_FORM_DATA)
public Response createHierarchy(
    @ApiParam(value = "hierarchy type", required = true) @NotNull @FormDataParam("type") EHierarchyType hierarchyType,
    @ApiParam(value = "parametric part of the hierarchy", required = true) @NotNull @FormDataParam("params") Map<String, Folder2> folderMap
) throws ItemExistsException, AccessDeniedException, PathNotFoundException, WebserviceException, RepositoryException, DatabaseException, ExtensionException, AutomationException, UnknowException, IOException, UserQuotaExceededException, LockException, VersionException {
StopWatch stopWatch = new StopWatch();
folderCtrl.createHierarchy(folderMap, hierarchyType);
logger.info("create hierarchy took: " + stopWatch.getElapsedTime());
return Response.ok().build();
}
and this is how the generated JSON looks for this endpoint:
"/folder/create_type" : {
"post" : {
"tags" : [ "folder" ],
"summary" : "creates folder hierarchy type client|lead",
"description" : "creates folder hierarchy type client|lead",
"operationId" : "createHierarchy",
"consumes" : [ "multipart/form-data" ],
"parameters" : [ {
"name" : "type",
"in" : "formData",
"description" : "hierarchy type",
"required" : true,
"type" : "string",
"enum" : [ "CLIENT", "LEAD" ]
}, {
"name" : "params",
"in" : "formData",
"description" : "parametric part of the hierarchy",
"required" : true,
"type" : "object"
} ],
"responses" : {
"200" : {
"description" : "creation successfull"
}
}
}
}
When I try to parse this output in Swagger Editor, it returns an error, and I think the reason might be that for the "params" parameter it generated type "object" instead of a schema. My point here is to find out why. Is it some bug in Swagger, or is it me that missed something?
Also, on another endpoint I have, there is a @FormDataParam that is a POJO model object annotated with @ApiModel. Swagger translates this as type 'ref', but it doesn't give the user any other clue about what this object is or which fields it should contain. In Swagger UI I see just 'undefined' as the param type, which is not very informative. What do I need to do in order to see the object's structure and to supply its JSON definition as an example to try in the UI?
Thanks
This answer contains examples of how the final Swagger spec should look, but I don't know how to express that using Swagger @annotations. Hope this gives you some ideas anyway.
In Swagger 2.0, there is no straightforward way to have file + object in request body – form parameters can be primitive values, arrays and files but not objects, and body parameters support objects but not files (although you can try representing files as type: string – more on that below).
The next version, OpenAPI Specification 3.0 (which is an RC at the time of writing), will support request bodies containing files + objects – check this example. I assume the @annotations will be updated to support that too.
For now you have a couple of options.
1) One possible way is to pass the file contents as a binary string as part of the body parameter. Your API spec would look like:
paths:
/something:
post:
consumes:
- application/json
parameters:
- in: body
name: body
required: true
schema:
$ref: '#/definitions/FileWithMetadata'
...
definitions:
FileWithMetadata:
type: object
required: [file_data]
properties:
file_data:
type: string
format: binary # or format: byte
metadata:
type: object
additionalProperties:
type: string
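On the client side, a request body matching this FileWithMetadata schema can be built by base64-encoding the file contents. A minimal Python sketch (the function name and the metadata values are made up for illustration):

```python
import base64
import json

def build_file_body(file_bytes: bytes, metadata: dict) -> str:
    """Build a JSON body for the FileWithMetadata schema above,
    sending file_data as a base64 string (format: byte)."""
    return json.dumps({
        "file_data": base64.b64encode(file_bytes).decode("ascii"),
        "metadata": metadata,
    })

body = build_file_body(b"...zip bytes...", {"description": "A ZIP file", "author": "Helen"})
```

With format: binary you would instead send the raw bytes under a non-JSON content type; format: byte keeps the whole payload valid JSON at the cost of ~33% size overhead.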
2) Another possible way is to send the metadata names and values as separate arrays, so you would have 3 form parameters: the file, an array of key names, and an array of key values. This is analogous to:
curl -F "file=#foo.zip" -F "metadata_keys=description,author" -F "metadata_values=A ZIP file,Helen" https://api.example.com
Your API spec would look like this:
paths:
/something:
post:
consumes:
- multipart/form-data
parameters:
- in: formData
name: file
type: file
required: true
- in: formData
name: metadata_keys
type: array
items:
type: string
- in: formData
name: metadata_values
type: array
items:
type: string
I have my ELK stack configured and running using log4j and everything is working fine. What I would like to be able to do is group all exceptions by their type, for example - create a terms graph and have a term for each exception type like FileNotFound, NullPointerException and so on. I already have a stack_trace field which includes the exception type at the first line, and then the complete stack trace. I found something online like this:
filter{
mutate {
gsub => [
"stack_trace", "\n.*", ""
]
}
}
but this would just override the stack_trace field with its first line, which is not what I want. I want to add a new field that takes the first line, the exception type, out of the stack_trace field.
Make a copy of the stack trace field and perform your gsub on that
filter{
mutate {
add_field => {
"exception" => "%{stack_trace}"
}
}
mutate {
gsub => [
"exception", "\n.*", ""
]
}
}
EDIT: Thanks to @Alpha for pointing out this question; you may need to use two separate mutates.
Short: I'd like to know the name of this format!
I would like to know if this is a special common format or just a simple self-generated config file:
scenes : {
Scene : {
class : Scene
sources : {
Game Capture : {
render : 1
class : GraphicsCapture
data : {
window : "[duke3d]: Duke Nukem 3D Atomic Edition 1.4.3 STABLE"
windowClass : SDL_app
executable : duke3d.exe
stretchImage : 0
alphaBlend : 0
ignoreAspect : 0
captureMouse : 1
invertMouse : 0
safeHook : 0
useHotkey : 0
hotkey : 123
gamma : 100
}
cx : 1920
cy : 1080
}
}
}
}
My background is that I would like to read multiple files like the one above, and I don't want to implement a whole new parser for this. That's why I want to fall back on Java libraries which have already implemented this feature. But without knowing this kind of format, it's quite difficult to search for such libraries.
// additional info
This is a config file or a "scene file" for Open Broadcaster Software.
Filename extension is .xconfig
This appears to be a config file or a "scene file" for Open Broadcaster Software.
When used with OBS it has an extension of .xconfig.
Hope this helps.
-Yang
I got some feedback from the main developer of these files.
As I thought, this is not a known format - just a simple config file.
solved!
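Since this turned out to be a custom format with no off-the-shelf library, a small hand-rolled parser may be the pragmatic route after all. A minimal Python sketch, assuming the format is exactly what the sample shows: `key : value` pairs, brace-delimited nesting, and optional double-quoted values (all values are kept as strings):

```python
def parse_xconfig(text: str) -> dict:
    """Parse the brace-delimited 'key : value' format shown above into
    nested dicts. A minimal sketch; real .xconfig files may contain
    cases (escapes, inline braces) that this does not cover."""
    root = {}
    stack = [root]
    for raw in text.splitlines():
        line = raw.strip()
        if not line:
            continue
        if line == "}":
            # End of the current nested block.
            stack.pop()
        elif line.endswith("{"):
            # "key : {" opens a nested block.
            key = line[:-1].rstrip(" :").strip()
            child = {}
            stack[-1][key] = child
            stack.append(child)
        else:
            # "key : value"; split on the first colon only, so quoted
            # values containing ':' stay intact.
            key, _, value = line.partition(":")
            stack[-1][key.strip()] = value.strip().strip('"')
    return root

cfg = parse_xconfig('scenes : {\n Scene : {\n  class : Scene\n }\n}')
```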