Flink 1.4.2 SQL Maps? (Java)

I am currently using Flink v1.4.2.
Suppose I have a POJO:
class CustomObj {
    public Map<String, String> custTable = new HashMap<>();

    public Map<String, String> getcustTable() { return custTable; }

    public void setcustTable(Map<String, String> custTable) {
        this.custTable = custTable;
    }
}
I have a DataStream<CustomObj> ds = ... // from some Kafka source
Now I register it:
tableEnv.registerDataStream("tableName", ds);
And I want to run:
tableEnv.sqlQuery("SELECT * FROM tableName WHERE custTable['key'] = 'val'");
When I try running this I get the error:
org.apache.flink.table.api.TableException: Type is not supported: ANY
What can I do about this and how can I fix it?
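For reference, here is a minimal, self-contained sketch of the setup described above (Flink 1.4.x). fromElements stands in for the Kafka source; running this should reproduce the reported "Type is not supported: ANY" error, since the Map field is registered as a generic type.

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

public class CustTableQueryJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);

        // fromElements stands in for the Kafka source used in the real job
        DataStream<CustomObj> ds = env.fromElements(new CustomObj());

        tableEnv.registerDataStream("tableName", ds);
        Table result = tableEnv.sqlQuery(
                "SELECT * FROM tableName WHERE custTable['key'] = 'val'");

        tableEnv.toAppendStream(result, Row.class).print();
        env.execute();
    }
}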

Related

Will Kafka flatMapValues split the record into multiple records when passing a JSON array object?

I'm using Confluent version 5.0.0.
I have a JSON record like the one below:
{
    "name" : "David,Corral,Babu",
    "age" : 23
}
Using Kafka Streams, I want to split the above record into multiple records based on the commas in the value of the "name" key. The output should be something like:
{
    "name" : "David",
    "age" : 23
},
{
    "name" : "Corral",
    "age" : 23
},
{
    "name" : "Babu",
    "age" : 23
}
For this I'm using flatMapValues, but so far I haven't been able to achieve the expected results. I also wanted to check whether flatMapValues is the right function for this requirement.
I've used the following code:
package test;

import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.*;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.ValueMapper;
import org.apache.kafka.streams.kstream.KeyValueMapper;
import org.apache.kafka.streams.kstream.Produced;

import java.util.Arrays;
import java.util.Properties;
import java.util.concurrent.CountDownLatch;

public class RecordSplliter {

    public static void main(String[] args) throws Exception {
        System.out.println("** STARTING RecordSplliter STREAM APP **");
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "json-e44nric2315her");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, PersonSeder.class);

        final Serde<String> stringSerde = Serdes.String();
        final StreamsBuilder builder = new StreamsBuilder();

        // Consume the JSON input and split the name field
        KStream<String, Person> source = builder.stream("streams-plaintext-input");
        KStream<String, String> output = source
                .flatMapValues(person -> Arrays.asList(person.getName().split(",")));
        output.to("streams-output");

        final Topology topology = builder.build();
        final KafkaStreams streams = new KafkaStreams(topology, props);
        final CountDownLatch latch = new CountDownLatch(1);

        // Attach shutdown handler to catch Ctrl-C
        Runtime.getRuntime().addShutdownHook(new Thread("streams-shutdown-hook") {
            @Override
            public void run() {
                streams.close();
                latch.countDown();
            }
        });

        try {
            streams.start();
            latch.await();
        } catch (Throwable e) {
            System.exit(1);
        }
        System.exit(0);
    }
}
At runtime I got the following exception:
08:31:10,822 ERROR org.apache.kafka.streams.processor.internals.AssignedStreamsTasks - stream-thread [json-enricher-0f8bc964-40c0-41f2-a724-dfa638923387-StreamThread-1] Failed to process stream task 0_0 due to the following error:
org.apache.kafka.streams.errors.StreamsException: Exception caught in process. taskId=0_0, processor=KSTREAM-SOURCE-0000000000, topic=streams-plaintext-input, partition=0, offset=0
at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:304)
at org.apache.kafka.streams.processor.internals.AssignedStreamsTasks.process(AssignedStreamsTasks.java:94)
at org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:409)
at org.apache.kafka.streams.processor.internals.StreamThread.processAndMaybeCommit(StreamThread.java:957)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:832)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:767)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:736)
Caused by: org.apache.kafka.streams.errors.StreamsException: A serializer (key: org.apache.kafka.common.serialization.StringSerializer / value: myapps.PersonSerializer) is not compatible to the actual key or value type (key type: unknown because key is null / value type: java.lang.String). Change the default Serdes in StreamConfig or provide correct Serdes via method parameters.
at org.apache.kafka.streams.processor.internals.SinkNode.process(SinkNode.java:94)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:143)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:126)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:90)
at org.apache.kafka.streams.kstream.internals.KStreamFlatMapValues$KStreamFlatMapValuesProcessor.process(KStreamFlatMapValues.java:42)
at org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:50)
at org.apache.kafka.streams.processor.internals.ProcessorNode.runAndMeasureLatency(ProcessorNode.java:244)
at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:133)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:143)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:126)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:90)
at org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:87)
at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:288)
... 6 more
Caused by: java.lang.ClassCastException: java.lang.String cannot be cast to myapps.Person
at myapps.PersonSerializer.serialize(PersonSerializer.java:1)
at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:154)
at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:98)
at org.apache.kafka.streams.processor.internals.SinkNode.process(SinkNode.java:89)
... 18 more
08:31:10,827 INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [json-enricher-0f8bc964-40c0-41f2-a724-dfa638923387-StreamThread-1] State transition from RUNNING to PENDING_SHUTDOWN
08:31:10,827 INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [json-enricher-0f8bc964-40c0-41f2-a724-dfa638923387-StreamThread-1] Shutting down
08:31:10,833 INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=json-enricher-0f8bc964-40c0-41f2-a724-dfa638923387-StreamThread-1-producer] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.
08:31:10,843 INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [json-enricher-0f8bc964-40c0-41f2-a724-dfa638923387-StreamThread-1] State transition from PENDING_SHUTDOWN to DEAD
08:31:10,843 INFO org.apache.kafka.streams.KafkaStreams - stream-client [json-enricher-0f8bc964-40c0-41f2-a724-dfa638923387] State transition from RUNNING to ERROR
08:31:10,843 WARN org.apache.kafka.streams.KafkaStreams - stream-client [json-enricher-0f8bc964-40c0-41f2-a724-dfa638923387] All stream threads have died. The instance will be in error state and should be closed.
08:31:10,843 INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [json-enricher-0f8bc964-40c0-41f2-a724-dfa638923387-StreamThread-1] Shutdown complete
Exception in thread "json-enricher-0f8bc964-40c0-41f2-a724-dfa638923387-StreamThread-1" org.apache.kafka.streams.errors.StreamsException: Exception caught in process. taskId=0_0, processor=KSTREAM-SOURCE-0000000000, topic=streams-plaintext-input, partition=0, offset=0
at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:304)
at org.apache.kafka.streams.processor.internals.AssignedStreamsTasks.process(AssignedStreamsTasks.java:94)
at org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:409)
at org.apache.kafka.streams.processor.internals.StreamThread.processAndMaybeCommit(StreamThread.java:957)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:832)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:767)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:736)
Caused by: org.apache.kafka.streams.errors.StreamsException: A serializer (key: org.apache.kafka.common.serialization.StringSerializer / value: myapps.PersonSerializer) is not compatible to the actual key or value type (key type: unknown because key is null / value type: java.lang.String). Change the default Serdes in StreamConfig or provide correct Serdes via method parameters.
at org.apache.kafka.streams.processor.internals.SinkNode.process(SinkNode.java:94)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:143)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:126)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:90)
at org.apache.kafka.streams.kstream.internals.KStreamFlatMapValues$KStreamFlatMapValuesProcessor.process(KStreamFlatMapValues.java:42)
at org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:50)
at org.apache.kafka.streams.processor.internals.ProcessorNode.runAndMeasureLatency(ProcessorNode.java:244)
at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:133)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:143)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:126)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:90)
at org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:87)
at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:288)
... 6 more
Caused by: java.lang.ClassCastException: java.lang.String cannot be cast to myapps.Person
at myapps.PersonSerializer.serialize(PersonSerializer.java:1)
at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:154)
at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:98)
at org.apache.kafka.streams.processor.internals.SinkNode.process(SinkNode.java:89)
... 18 more
The exception occurs because your flatMapValues produces values of type String. In your code you don't pass any Produced to the KStream::to call, so it falls back to the default serde (set in the properties), which in your case is PersonSeder.class. Your values are of type String, but PersonSeder.class is used for serialization.
If you want to split the record into Person objects, you need something like this:
KStream<String, Person> output = source
        .flatMapValues(person ->
                Arrays.stream(person.getName().split(","))
                        .map(name -> new Person(name, person.getAge()))
                        .collect(Collectors.toList()));
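(Side note, a sketch only: if the goal really were to emit plain String values, the sink serdes could instead be overridden explicitly at the sink, since, as noted above, to() otherwise falls back to the defaults from the properties. This reuses the source stream from the example above.)

KStream<String, String> names = source
        .flatMapValues(person -> Arrays.asList(person.getName().split(",")));
names.to("streams-output", Produced.with(Serdes.String(), Serdes.String()));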
I used the following code with your serializer and a symmetrical deserializer (also based on Gson), and it works:
Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "app1");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, PersonSerdes.class);

final StreamsBuilder builder = new StreamsBuilder();

KStream<String, Person> source = builder.stream("input");
KStream<String, Person> output = source
        .flatMapValues(person ->
                Arrays.stream(person.getName().split(","))
                        .map(name -> new Person(name, person.getAge()))
                        .collect(Collectors.toList()));
output.to("output");

KafkaStreams streams = new KafkaStreams(builder.build(), props);
streams.start();
Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
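The answer refers to a symmetric Gson-based serializer/deserializer for Person but does not show it; below is a sketch of what such a PersonSerdes could look like, modeled on the MapSerdes further down. The Person POJO with getName() and getAge() is assumed from the question.

import com.google.gson.Gson;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serializer;

import java.nio.charset.StandardCharsets;
import java.util.Map;

public class PersonSerdes implements Serde<Person> {

    private final Gson gson = new Gson();

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {}

    @Override
    public void close() {}

    @Override
    public Serializer<Person> serializer() {
        return new Serializer<Person>() {
            @Override
            public void configure(Map<String, ?> configs, boolean isKey) {}

            @Override
            public byte[] serialize(String topic, Person data) {
                // Person -> JSON -> UTF-8 bytes
                return gson.toJson(data).getBytes(StandardCharsets.UTF_8);
            }

            @Override
            public void close() {}
        };
    }

    @Override
    public Deserializer<Person> deserializer() {
        return new Deserializer<Person>() {
            @Override
            public void configure(Map<String, ?> configs, boolean isKey) {}

            @Override
            public Person deserialize(String topic, byte[] data) {
                // UTF-8 bytes -> JSON -> Person
                return gson.fromJson(new String(data, StandardCharsets.UTF_8), Person.class);
            }

            @Override
            public void close() {}
        };
    }
}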
UPDATE 1:
Regarding your question about using JSON instead of a POJO: everything depends on your Serdes. If you use a generic Serdes, you can serialize and deserialize to/from JSON as a Map.
Below is a simple MapSerdes that can be used for that, together with sample usage code.
import com.google.gson.Gson;
import com.google.gson.reflect.TypeToken;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serializer;

import java.lang.reflect.Type;
import java.nio.charset.Charset;
import java.util.Map;

public class MapSerdes implements Serde<Map<String, String>> {

    private static final Charset CHARSET = Charset.forName("UTF-8");

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {}

    @Override
    public void close() {}

    @Override
    public Serializer<Map<String, String>> serializer() {
        return new Serializer<Map<String, String>>() {
            private Gson gson = new Gson();

            @Override
            public void configure(Map<String, ?> configs, boolean isKey) {}

            @Override
            public byte[] serialize(String topic, Map<String, String> data) {
                String line = gson.toJson(data);
                // Return the bytes of the JSON string 'line'
                return line.getBytes(CHARSET);
            }

            @Override
            public void close() {}
        };
    }

    @Override
    public Deserializer<Map<String, String>> deserializer() {
        return new Deserializer<Map<String, String>>() {
            private Type type = new TypeToken<Map<String, String>>() {}.getType();
            private Gson gson = new Gson();

            @Override
            public void configure(Map<String, ?> configs, boolean isKey) {}

            @Override
            public Map<String, String> deserialize(String topic, byte[] data) {
                return gson.fromJson(new String(data, CHARSET), type);
            }

            @Override
            public void close() {}
        };
    }
}
Sample usage (instead of "name", you can use whichever property your map contains):
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;

public class GenericJsonSplitterApp {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "app1");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, MapSerdes.class);

        final StreamsBuilder builder = new StreamsBuilder();

        KStream<String, Map<String, String>> source = builder.stream("input");
        KStream<String, Map<String, String>> output = source
                .flatMapValues(map ->
                        Arrays.stream(map.get("name").split(","))
                                .map(name -> {
                                    HashMap<String, String> splittedMap = new HashMap<>(map);
                                    splittedMap.put("name", name);
                                    return splittedMap;
                                })
                                .collect(Collectors.toList()));
        output.to("output");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}

How to pass a HashMap to the forEach tag in an XLS generated by JETT?

I have a Map in a managed bean:
private Map<FaseProducao, Set<FichaTecnicaOperacao>> fichasTecnicasOperacaoResumo;
that references the entity FichaTecnica:
public class FichaTecnica {
    //...
    private Set<FichaTecnicaOperacao> operacoes;
}
which I need to pass as a parameter to beans.put() to generate an XLS with JETT:
public void createRelatorioFichaTecnica(FichaTecnica fichaTecnica) throws IOException {
    // omitted...
    Map<String, Object> beans = new HashMap<String, Object>();
    beans.put("operacaoResumo", fichasTecnicasOperacaoResumo);
    try (ByteArrayOutputStream saida = new ByteArrayOutputStream();
         InputStream template = this.getClass().getResourceAsStream("/templates/jett/fichaTecnica.xls");
         Workbook workbook = transformer.transform(template, beans)) {
        // omitted...
    }
}
When the XLS is generated, this exception occurs:
WARNING [javax.enterprise.resource.webcontainer.jsf.lifecycle] (default task-28) #{ProdutoManagedBean.createRelatorioFichaTecnica(row)}: net.sf.jett.exception.AttributeExpressionException: Expected a "java.util.Collection" for "items", got a "java.util.HashMap": "${operacaoResumo}".
I don't understand this error: isn't a Map a collection? Why doesn't JETT recognize it in items="${operacaoResumo}"? I created this forEach based on the documentation:
http://jett.sourceforge.net/tags/forEach.html
As @rgettman suggested, I did:
public void createRelatorioFichaTecnica(FichaTecnica fichaTecnica) throws IOException {
    // omitted...
    Map<String, Object> beans = new HashMap<String, Object>();
    beans.put("operacaoResumo", fichasTecnicasOperacaoResumo.keySet());
}
thank you all!
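A possible variation on that fix (a sketch, not from the original thread): passing the entry set instead of the key set keeps both the FaseProducao key and its Set of FichaTecnicaOperacao available inside the loop, since Map.entrySet() is a java.util.Collection.

// Sketch: expose Map.Entry objects to the template instead of just the keys.
beans.put("operacaoResumo", fichasTecnicasOperacaoResumo.entrySet());
// Each loop variable in the template is then a Map.Entry, e.g.
// <jt:forEach items="${operacaoResumo}" var="entry"> ... ${entry.key} ... ${entry.value} ... </jt:forEach>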

Using one Spring Boot application inside another

I want to integrate my Spring Boot project into another one.
To do this, I export the .jar and put it in the libraries of the other project, which is also a Spring Boot project.
My .jar is:
https://drive.google.com/file/d/0B96L3Vd9zNeoQzhhcmFjT05vRWc/view?usp=sharing
And my main class in the other project is:
@SpringBootApplication
@EnableJpaRepositories
public class UpsysmarocApplicationTestlogApplication {

    public static void main(String[] args) {
        ConfigurableApplicationContext context = SpringApplication.run(UpsysmarocApplicationTestlogApplication.class, args);
        TraceabilityLogService traceabilityLogService = context.getBean(TraceabilityLogService.class);

        List<Map<String, String>> items = new ArrayList<>();
        Map<String, String> item = new HashMap<>();
        item.put("element", "Nom");
        item.put("oldValue", "Mkharbach2");
        item.put("newValue", "Mounji2");
        items.add(item);

        item = new HashMap<>();
        item.put("element", "Prenom");
        item.put("oldValue", "Ayoub2");
        item.put("newValue", "Said2");
        items.add(item);

        List<Map<String, String>> connections = new ArrayList<>();
        Map<String, String> connection = new HashMap<>();
        connection.put("className", "User");
        connection.put("originId", "3");
        connections.add(connection);

        TraceabilityLog traceabilityLog = traceabilityLogService.save("Eladlani2", "CREATION", items, connections);
        System.out.println("RETURN => " + traceabilityLog.getId());
    }
}
But I want another way that does not require instantiating the context by hand, and instead just uses the functionality provided by our module.
So I'm still looking for the best approach; thanks in advance.
To solve the problem, I added the project as a Maven dependency.
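For completeness, a hedged sketch of the "other way" asked about above: once the jar is on the classpath as a Maven dependency, the service can be injected instead of being fetched from a hand-built context. This assumes the library's service is a Spring-annotated component; the scanBasePackages values are placeholders for the real package names (otherwise an @Import of the library's configuration class would be needed).

import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;

// Placeholder packages: replace with the application's and the library's real base packages.
@SpringBootApplication(scanBasePackages = {"com.example.testlog", "com.example.traceability"})
@EnableJpaRepositories
public class UpsysmarocApplicationTestlogApplication {

    public static void main(String[] args) {
        SpringApplication.run(UpsysmarocApplicationTestlogApplication.class, args);
    }

    @Bean
    CommandLineRunner demo(TraceabilityLogService traceabilityLogService) {
        // The service is injected by Spring; no context.getBean(...) call is needed.
        return args -> System.out.println("Traceability service available: " + traceabilityLogService);
    }
}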

Spark serialization error when inserting Spark Streaming data into HBase

I'm confused about how Spark interacts with HBase in terms of data format. For instance, when I omit the line marked 'ERROR' in the following code snippet, it runs well, but adding the line produces a 'Task not serializable' error.
How do I change the code?
Why does the error happen?
My code is the following:
// HBase
Configuration hconfig = HBaseConfiguration.create();
hconfig.set("hbase.zookeeper.property.clientPort", "2222");
hconfig.set("hbase.zookeeper.quorum", "127.0.0.1");
hconn = HConnectionManager.createConnection(hconfig);
HTable htable = new HTable(hconfig, Bytes.toBytes(tableName));

// Kafka configuration
Set<String> topics = Collections.singleton(topic);
Map<String, String> kafkaParams = new HashMap<>();
kafkaParams.put("metadata.broker.list", "localhost:9092");
kafkaParams.put("zookeeper.connect", "localhost:2222");
kafkaParams.put("group.id", "tag_topic_id");

// Spark Streaming
JavaPairInputDStream<String, String> messages = KafkaUtils.createDirectStream(
        ssc, String.class, String.class, StringDecoder.class, StringDecoder.class, kafkaParams, topics);

JavaDStream<String> lines = messages.map(new Function<Tuple2<String, String>, String>() {
    @Override
    public String call(Tuple2<String, String> tuple2) {
        return tuple2._2();
    }
});

JavaDStream<String> records = lines.flatMap(new FlatMapFunction<String, String>() {
    @Override
    public Iterator<String> call(String x) throws IOException {
        ////////////// Put into HBase : ERROR /////////////////////
        String[] data = x.split(",");
        if (null != data && data.length > 2) {
            SimpleDateFormat sdf = new SimpleDateFormat("yyyyMMddHHmmss");
            String ts = sdf.format(new Date());
            Put put = new Put(Bytes.toBytes(ts));
            put.addImmutable(Bytes.toBytes(familyName), Bytes.toBytes("LINEID"), Bytes.toBytes(data[0]));
            put.addImmutable(Bytes.toBytes(familyName), Bytes.toBytes("TAGID"), Bytes.toBytes(data[1]));
            put.addImmutable(Bytes.toBytes(familyName), Bytes.toBytes("VAL"), Bytes.toBytes(data[2]));
            htable.put(put); // ***** ERROR ********
            htable.close();
        }
        return Arrays.asList(COLDELIM.split(x)).iterator();
    }
});

records.print();
ssc.start();
ssc.awaitTermination();
When I start my application, I get the following error:
Exception in thread "main" org.apache.spark.SparkException: Task not serializable
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:298)
at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:288)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:108)
at org.apache.spark.SparkContext.clean(SparkContext.scala:2037)
at org.apache.spark.streaming.dstream.DStream$$anonfun$flatMap$1.apply(DStream.scala:554)
at org.apache.spark.streaming.dstream.DStream$$anonfun$flatMap$1.apply(DStream.scala:554)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.SparkContext.withScope(SparkContext.scala:682)
at org.apache.spark.streaming.StreamingContext.withScope(StreamingContext.scala:264)
at org.apache.spark.streaming.dstream.DStream.flatMap(DStream.scala:553)
at org.apache.spark.streaming.api.java.JavaDStreamLike$class.flatMap(JavaDStreamLike.scala:172)
at org.apache.spark.streaming.api.java.AbstractJavaDStreamLike.flatMap(JavaDStreamLike.scala:42)
Caused by: java.io.NotSerializableException: org.apache.hadoop.hbase.client.HTable
Serialization stack:
- object not serializable (class: org.apache.hadoop.hbase.client.HTable, value: MCSENSOR;hconnection-0x6839203b)
The serialization debugger gives you a hint here:
Caused by: java.io.NotSerializableException: org.apache.hadoop.hbase.client.HTable
Serialization stack:
- object not serializable (class: org.apache.hadoop.hbase.client.HTable, value: MCSENSOR;hconnection-0x6839203b)
Move the part below inside the FlatMapFunction (the closure), before the point in the call method where you use it; that should solve the issue (see the sketch after the snippet).
Configuration hconfig = HBaseConfiguration.create();
hconfig.set("hbase.zookeeper.property.clientPort", "2222");
hconfig.set("hbase.zookeeper.quorum", "127.0.0.1");
hconn = HConnectionManager.createConnection(hconfig);
HTable htable = new HTable(hconfig, Bytes.toBytes(tableName));
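In context, that suggestion might look roughly like this (a sketch reusing the question's variable names such as lines, familyName, tableName and COLDELIM). Creating the HBase objects inside call() keeps anything non-serializable out of the closure that Spark ships to the executors.

JavaDStream<String> records = lines.flatMap(new FlatMapFunction<String, String>() {
    @Override
    public Iterator<String> call(String x) throws Exception {
        // Created on the executor, so nothing non-serializable is captured by the closure.
        Configuration hconfig = HBaseConfiguration.create();
        hconfig.set("hbase.zookeeper.property.clientPort", "2222");
        hconfig.set("hbase.zookeeper.quorum", "127.0.0.1");
        HConnection hconn = HConnectionManager.createConnection(hconfig);
        HTable htable = new HTable(hconfig, Bytes.toBytes(tableName));

        String[] data = x.split(",");
        if (data.length > 2) {
            String ts = new SimpleDateFormat("yyyyMMddHHmmss").format(new Date());
            Put put = new Put(Bytes.toBytes(ts));
            put.addImmutable(Bytes.toBytes(familyName), Bytes.toBytes("LINEID"), Bytes.toBytes(data[0]));
            put.addImmutable(Bytes.toBytes(familyName), Bytes.toBytes("TAGID"), Bytes.toBytes(data[1]));
            put.addImmutable(Bytes.toBytes(familyName), Bytes.toBytes("VAL"), Bytes.toBytes(data[2]));
            htable.put(put);
        }
        htable.close();
        hconn.close();
        return Arrays.asList(COLDELIM.split(x)).iterator();
    }
});

Opening a connection per record is costly; in practice the connection is usually created once per partition instead, but that goes beyond the answer's suggestion.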

How to dynamically add images in a nested loop (in a .docx template) with XDocReport and Velocity

I want to dynamically add images in a nested loop, using a .docx file as the template.
I use XDocReport 1.0.2 and Velocity.
Here is my code:
// Domain collections (structures, orientations, projections) are assumed to come from elsewhere.
List<Object> structureMapList = new ArrayList<Object>();
for (Structure structure : structures) {
    HashMap<String, Object> structureMap = new HashMap<String, Object>();
    List<Object> orientationMapList = new ArrayList<Object>();
    // orientation can be vertical or horizontal
    for (Orientation orientation : orientations) {
        HashMap<String, Object> orientationMap = new HashMap<String, Object>();
        List<Object> projectionMapList = new ArrayList<Object>();
        for (Integer projection : projections) {
            HashMap<String, Object> projectionMap = new HashMap<String, Object>();
            projectionMap.put("projectionImage",
                    getImageproviderByOrientationAndProjection(orientation, projection));
            projectionMapList.add(projectionMap);
        }
        orientationMap.put("projections", projectionMapList);
        orientationMapList.add(orientationMap);
    }
    structureMap.put("orientations", orientationMapList);
    structureMapList.add(structureMap);
}
context.put("structures", structureMapList);

// my metadata are set like this:
metadata.addFieldAsImage("projectionImage", "projection.projectionImage");
In my .docx template I do this:
#foreach($structure in $structures)
    #foreach($orientation in $structure.orientations)
        #foreach($projection in $orientation.projections)
            ## print the image for this projection
        #end
    #end
#end
Use a POJO, as described in the samples (DeveloperWithImage.java):
/* Load the photos as a list in the metadata */
FieldsMetadata metadata = report.createFieldsMetadata();
metadata.load("photos", Photo.class, true);
report.setFieldsMetadata(metadata);

List<Photo> photos = ...
context.put("photos", photos);
The Photo.java:
public class Photo {
private IImageProvider photo;
#FieldMetadata( images = { #ImageMetadata( name = "photo" ) }, description="Photo" )
public IImageProvider getPhoto() {
return photo;
}
public void setPhoto(IImageProvider photo) {
this.photo = photo;
}}
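For context, a sketch of how the photos list used above might be populated; ClassPathImageProvider and the resource path are illustrative assumptions rather than part of the original answer.

List<Photo> photos = new ArrayList<Photo>();
Photo photo = new Photo();
// Load an image from the classpath; any IImageProvider implementation would do here.
photo.setPhoto(new ClassPathImageProvider(Photo.class, "images/projection.png"));
photos.add(photo);
context.put("photos", photos);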
