I'm new to Apache Storm and Kafka and am trying to learn these concepts via the courses provided by OpenClassrooms. The principle is simple: messages are sent via a Python program to a Kafka server and are retrieved via a Kafka spout defined in the main class of a Storm topology. The problem is that I don't understand how the bolt retrieves the messages. From what I understand, this is done in the ParsingBolt class with the following line of code: JSONObject obj = (JSONObject)jsonParser.parse(input.getStringByField("value"));. The only thing I don't understand is how we know that the messages are contained in the "value" field. Below you can find the main class and the parsing bolt class. (The whole project is available here)
The main class:
package velos;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.StormSubmitter;
import org.apache.storm.generated.AlreadyAliveException;
import org.apache.storm.generated.AuthorizationException;
import org.apache.storm.generated.InvalidTopologyException;
import org.apache.storm.generated.StormTopology;
import org.apache.storm.kafka.spout.KafkaSpout;
import org.apache.storm.kafka.spout.KafkaSpoutConfig;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseWindowedBolt;
import org.apache.storm.tuple.Fields;
public class App {
public static void main(String[] args)
throws AlreadyAliveException, InvalidTopologyException, AuthorizationException {
TopologyBuilder builder = new TopologyBuilder();
KafkaSpoutConfig.Builder<String, String> spoutConfigBuilder = KafkaSpoutConfig.builder("localhost:9092",
"velib-stations");
spoutConfigBuilder.setProp(ConsumerConfig.GROUP_ID_CONFIG, "city-stats");
KafkaSpoutConfig<String, String> spoutConfig = spoutConfigBuilder.build();
builder.setSpout("stations", new KafkaSpout<String, String>(spoutConfig));
builder.setBolt("station-parsing", new StationParsingBolt()).shuffleGrouping("stations");
builder.setBolt("city-stats",
new CityStatsBolt().withTumblingWindow(BaseWindowedBolt.Duration.of(1000 * 60 * 5)))
.fieldsGrouping("station-parsing", new Fields("city"));
builder.setBolt("save-results", new SaveResultsBolt()).fieldsGrouping("city-stats", new Fields("city"));
StormTopology topology = builder.createTopology();
Config config = new Config();
config.setMessageTimeoutSecs(60 * 30);
String topologyName = "Velos";
if (args.length > 0 && args[0].equals("remote")) {
StormSubmitter.submitTopology(topologyName, config, topology);
} else {
LocalCluster cluster = new LocalCluster();
cluster.submitTopology(topologyName, config, topology);
}
}
}
The ParsingBolt:
package velos;
import java.util.Map;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;
import org.apache.storm.shade.org.json.simple.JSONObject;
import org.apache.storm.shade.org.json.simple.parser.JSONParser;
import org.apache.storm.shade.org.json.simple.parser.ParseException;
public class StationParsingBolt extends BaseRichBolt {
private OutputCollector outputCollector;
@Override
public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
outputCollector = collector;
}
@Override
public void execute(Tuple input) {
try {
process(input);
} catch (ParseException e) {
e.printStackTrace();
outputCollector.fail(input);
}
}
public void process(Tuple input) throws ParseException {
JSONParser jsonParser = new JSONParser();
JSONObject obj = (JSONObject)jsonParser.parse(input.getStringByField("value"));
String contract = (String)obj.get("contract_name");
Long availableStands = (Long)obj.get("available_bike_stands");
Long stationNumber = (Long)obj.get("number");
outputCollector.emit(new Values(contract, stationNumber, availableStands));
outputCollector.ack(input);
}
@Override
public void declareOutputFields(OutputFieldsDeclarer declarer) {
declarer.declare(new Fields("city", "station_id", "available_stands"));
}
}
By default the "topic", "partition", "offset", "key", and "value" will be emitted to the "default" stream.
https://storm.apache.org/releases/2.4.0/storm-kafka-client.html
Use a RecordTranslator to change this.
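For illustration, here is a minimal sketch of how those output fields could be renamed, continuing the spout configuration from the question and assuming the storm-kafka-client builder API shown there; the "json" field name is made up:
// Hypothetical sketch: emit only the record value under a custom field name
// instead of the default topic/partition/offset/key/value fields.
KafkaSpoutConfig.Builder<String, String> spoutConfigBuilder =
        KafkaSpoutConfig.builder("localhost:9092", "velib-stations");
spoutConfigBuilder.setProp(ConsumerConfig.GROUP_ID_CONFIG, "city-stats");
spoutConfigBuilder.setRecordTranslator(
        record -> new Values(record.value()), // what each tuple contains
        new Fields("json"));                  // field name the bolt would then read
KafkaSpoutConfig<String, String> spoutConfig = spoutConfigBuilder.build();
The bolt would then call input.getStringByField("json") instead of input.getStringByField("value").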
I am trying to use "MiniKdc" in my code implementation like "MiniKdc.main(config)" but am getting error "can not resolve symbol 'MiniKdc' ".
I am following this example https://www.baeldung.com/spring-security-kerberos-integration
i have added this dependecy in my build.gradle
implementation 'org.springframework.security.kerberos:spring-security-kerberos-test:1.0.1.RELEASE'
I tried to search for the dependency on Maven Central and I can't find it.
Here is the class I am working on; I want to be able to import MiniKdc in the second import statement.
import org.apache.commons.io.FileUtils;
import org.springframework.security.kerberos.test.MiniKdc;
import java.io.File;
import java.io.IOException;
import java.nio.file.Path;
import java.nio.file.Paths;
class KerberosMiniKdc {
private static final String KRB_WORK_DIR = ".\\spring-security-sso\\spring-security-sso-kerberos\\krb-test-workdir";
public static void main(String[] args) throws Exception {
String[] config = MiniKdcConfigBuilder.builder()
.workDir(prepareWorkDir())
.confDir("minikdc-krb5.conf")
.keytabName("example.keytab")
.principals("client/localhost", "HTTP/localhost")
.build();
MiniKdc.main(config);
}
private static String prepareWorkDir() throws IOException {
Path dir = Paths.get(KRB_WORK_DIR);
File directory = dir.normalize().toFile();
FileUtils.deleteQuietly(directory);
FileUtils.forceMkdir(directory);
return dir.toString();
}
}
Is there anything I am doing wrong?
As of 2021, spring-security-kerberos is not well maintained.
I suggest using Apache Kerby instead, either directly or via another library like Kerb4J. See an example here.
package com.kerb4j;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kerby.kerberos.kerb.client.KrbConfig;
import org.apache.kerby.kerberos.kerb.server.SimpleKdcServer;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.BeforeEach;
import java.io.File;
public class KerberosSecurityTestcase {
private static final Log log = LogFactory.getLog(KerberosSecurityTestcase.class);
private static int i = 10000;
protected int kdcPort;
private SimpleKdcServer kdc;
private File workDir;
private KrbConfig conf;
@BeforeAll
public static void debugKerberos() {
System.setProperty("sun.security.krb5.debug", "true");
}
@BeforeEach
public void startMiniKdc() throws Exception {
kdcPort = i++;
createTestDir();
createMiniKdcConf();
log.info("Starting Simple KDC server on port " + kdcPort);
kdc = new SimpleKdcServer(workDir, conf);
kdc.setKdcPort(kdcPort);
kdc.setAllowUdp(false);
kdc.init();
kdc.start();
}
@AfterEach
public void stopMiniKdc() throws Exception {
log.info("Stopping Simple KDC server on port " + kdcPort);
if (kdc != null) {
kdc.stop();
log.info("Stopped Simple KDC server on port " + kdcPort);
}
}
public void createTestDir() {
workDir = new File(System.getProperty("test.dir", "target"));
}
public void createMiniKdcConf() {
conf = new KrbConfig();
}
public SimpleKdcServer getKdc() {
return kdc;
}
public File getWorkDir() {
return workDir;
}
public KrbConfig getConf() {
return conf;
}
}
Disclaimer: I'm the author of Kerb4J
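As a usage illustration (not from the original answer), a test could extend the base class above and create principals against the embedded KDC. The principal names and keytab file below are made up, and the SimpleKdcServer methods shown are assumed from Apache Kerby's API:
package com.kerb4j;
import java.io.File;
import org.junit.jupiter.api.Test;
public class EmbeddedKdcUsageTest extends KerberosSecurityTestcase {
    @Test
    public void createsPrincipalsAgainstEmbeddedKdc() throws Exception {
        // hypothetical principals and keytab; adjust to your own test setup
        File keytab = new File(getWorkDir(), "example.keytab");
        getKdc().createAndExportPrincipals(keytab, "client/localhost", "HTTP/localhost");
        getKdc().createPrincipal("alice", "alice-password");
    }
}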
I wrote a pattern. I have a list of conditions (the rules come from JSON), and the data (JSON) is coming from a Kafka server. I want to filter the data with this list, but it is not working. How can I do that?
I am not sure about the keyed stream and the alarms built in the for loop. Can Flink work like this?
main program:
package cep_kafka_eample.cep_kafka;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;
import com.google.gson.Gson;
import com.google.gson.JsonArray;
import com.google.gson.JsonParser;
import org.apache.flink.cep.CEP;
import org.apache.flink.cep.PatternSelectFunction;
import org.apache.flink.cep.PatternStream;
import org.apache.flink.cep.pattern.Pattern;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.SlidingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010;
import org.apache.flink.streaming.util.serialization.JSONDeserializationSchema;
import util.AlarmPatterns;
import util.Rules;
import util.TypeProperties;
import java.io.FileReader;
import java.util.*;
public class MainClass {
public static void main( String[] args ) throws Exception
{
ObjectMapper mapper = new ObjectMapper();
JsonParser parser = new JsonParser();
Object obj = parser.parse(new FileReader(
"c://new 5.json"));
JsonArray array = (JsonArray)obj;
Gson googleJson = new Gson();
List<Rules> ruleList = new ArrayList<>();
for(int i = 0; i< array.size() ; i++) {
Rules jsonObjList = googleJson.fromJson(array.get(i), Rules.class);
ruleList.add(jsonObjList);
}
//apache kafka properties
Properties properties = new Properties();
properties.setProperty("zookeeper.connect", "localhost:2181");
properties.setProperty("bootstrap.servers", "localhost:9092");
//starting flink
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(1000).setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
//get kafka values
FlinkKafkaConsumer010<ObjectNode> myConsumer = new FlinkKafkaConsumer010<>("demo", new JSONDeserializationSchema(),
properties);
List<Pattern<ObjectNode,?>> patternList = new ArrayList<>();
DataStream<ObjectNode> dataStream = env.addSource(myConsumer);
dataStream.windowAll(SlidingProcessingTimeWindows.of(Time.seconds(10), Time.seconds(5)));
DataStream<ObjectNode> keyedStream = dataStream;
//get pattern list, keyeddatastream
for(Rules rules : ruleList){
List<TypeProperties> typePropertiesList = rules.getTypePropList();
for (int i = 0; i < typePropertiesList.size(); i++) {
TypeProperties typeProperty = typePropertiesList.get(i);
if (typeProperty.getGroupType() != null && typeProperty.getGroupType().equals("group")) {
keyedStream = keyedStream.keyBy(
jsonNode -> jsonNode.get(typeProperty.getPropName().toString())
);
}
}
Pattern<ObjectNode,?> pattern = new AlarmPatterns().getAlarmPattern(rules);
patternList.add(pattern);
}
//CEP pattern and alarms
List<DataStream<Alert>> alertList = new ArrayList<>();
for(Pattern<ObjectNode,?> pattern : patternList){
PatternStream<ObjectNode> patternStream = CEP.pattern(keyedStream, pattern);
DataStream<Alert> alarms = patternStream.select(new PatternSelectFunction<ObjectNode, Alert>() {
private static final long serialVersionUID = 1L;
public Alert select(Map<String, List<ObjectNode>> map) throws Exception {
return new Alert("new message");
}
});
alertList.add(alarms);
}
env.execute("Flink CEP monitoring job");
}
}
getAlarmPattern:
package util;
import org.apache.flink.cep.pattern.Pattern;
import org.apache.flink.cep.pattern.conditions.IterativeCondition;
import org.apache.flink.streaming.api.datastream.DataStream;
import com.fasterxml.jackson.databind.node.ObjectNode;
public class AlarmPatterns {
public Pattern<ObjectNode, ?> getAlarmPattern(Rules rules) {
//MySimpleConditions conditions = new MySimpleConditions();
Pattern<ObjectNode, ?> alarmPattern = Pattern.<ObjectNode>begin("first")
.where(new IterativeCondition<ObjectNode>() {
@Override
public boolean filter(ObjectNode jsonNodes, Context<ObjectNode> context) throws Exception {
for (Criterias criterias : rules.getCriteriaList()) {
if (criterias.getCriteriaType().equals("equals")) {
return jsonNodes.get(criterias.getPropName()).equals(criterias.getCriteriaValue());
} else if (criterias.getCriteriaType().equals("greaterThen")) {
if (!jsonNodes.get(criterias.getPropName()).equals(criterias.getCriteriaValue())) {
return false;
}
int count = 0;
for (ObjectNode node : context.getEventsForPattern("first")) {
count += node.get("value").asInt();
}
return Integer.compare(count, 5) > 0;
} else if (criterias.getCriteriaType().equals("lessThen")) {
if (!jsonNodes.get(criterias.getPropName()).equals(criterias.getCriteriaValue())) {
return false;
}
int count = 0;
for (ObjectNode node : context.getEventsForPattern("first")) {
count += node.get("value").asInt();
}
return Integer.compare(count, 5) < 0;
}
}
return false;
}
}).times(rules.getRuleCount());
return alarmPattern;
}
}
Thanks for using FlinkCEP!
Could you provide some more details about the exact error message (if any)? This would help a lot in pinning down the problem.
From a first look at the code, I can make the following observations:
First, the line:
dataStream.windowAll(SlidingProcessingTimeWindows.of(Time.seconds(10), Time.seconds(5)));
has no effect, because you never use the resulting stream in the rest of your program.
Second, you should attach a sink after the select(), e.g. call print() on each of your PatternStreams. If you do not, your output gets discarded. You can have a look here for examples, although the list is far from exhaustive.
Finally, I would recommend adding a within() clause to your pattern, so that you do not run out of memory.
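For example, roughly (the 10-second window is arbitrary; this assumes the Flink 1.x CEP API already used in your code):
// constrain the pattern in time so partial matches do not accumulate forever
Pattern<ObjectNode, ?> pattern = new AlarmPatterns().getAlarmPattern(rules)
        .within(Time.seconds(10));
// ... CEP.pattern(keyedStream, pattern) and patternStream.select(...) as in your code ...
// attach a sink, otherwise the selected alerts are discarded
alarms.print();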
The error was from my JSON object; I will fix it. When I run the job in IntelliJ, CEP doesn't work. When I submit it from the Flink console, it works.
I'm using Spring Batch with Java configuration (new to this) and I'm running into an error when trying to use a ClassifierCompositeItemWriter to generate separate files based on a classifier.
The error I'm getting is org.springframework.batch.item.WriterNotOpenException: Writer must be open before it can be written to
My configuration looks as follows:
package com.infonova.btcompute.batch.geneva.job;
import com.infonova.btcompute.batch.billruntransfer.BillRunTranStatusFinishedJobAssignment;
import com.infonova.btcompute.batch.billruntransfer.BillRunTranStatusInprogressJobAssignment;
import com.infonova.btcompute.batch.billruntransfer.BillRunTransferStatus;
import com.infonova.btcompute.batch.geneva.camel.GenevaJobLauncher;
import com.infonova.btcompute.batch.geneva.dto.GenevaDetailsResultsDto;
import com.infonova.btcompute.batch.geneva.dto.GenveaDetailsTransactionDto;
import com.infonova.btcompute.batch.geneva.properties.GenevaDetailsExportJobProperties;
import com.infonova.btcompute.batch.geneva.rowmapper.GenevaDetailsTransactionsRowMapper;
import com.infonova.btcompute.batch.geneva.steps.*;
import com.infonova.btcompute.batch.repository.BillrunTransferStatusMapper;
import com.infonova.btcompute.batch.utils.FileNameGeneration;
import com.infonova.product.batch.camel.CamelEnabledJob;
import org.apache.camel.Processor;
import org.apache.camel.builder.RouteBuilder;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.item.file.FlatFileItemWriter;
import org.springframework.batch.item.support.ClassifierCompositeItemWriter;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.classify.BackToBackPatternClassifier;
import org.springframework.context.annotation.Bean;
import org.springframework.core.io.FileSystemResource;
import org.springframework.jdbc.core.JdbcTemplate;
import javax.sql.DataSource;
import java.io.File;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
public abstract class AbstractGenevaDetailsExportJob extends CamelEnabledJob {
private static final Logger LOGGER = LoggerFactory.getLogger(AbstractGenevaDetailsExportJob.class);
@Autowired
protected JobBuilderFactory jobBuilders;
@Autowired
protected StepBuilderFactory stepBuilders;
@Autowired
protected DataSource datasource;
@Autowired
private BillrunTransferStatusMapper billrunTransferStatusMapper;
@Autowired
protected JdbcTemplate jdbcTemplate;
public abstract GenevaDetailsExportJobProperties jobProperties();
@Bean
public RouteBuilder routeBuilder(final GenevaDetailsExportJobProperties jobProperties, final Job job) {
return new RouteBuilder() {
@Override
public void configure() throws Exception {
from(jobProperties.getConsumer())
.transacted("PROPAGATION_REQUIRED")
.routeId(jobProperties.getInputRouteName())
.process(genevaJobLauncher(job));
//.to("ftp://app#127.0.0.1?password=secret");
}
};
}
@Bean
public Processor genevaJobLauncher(Job job) {
return new GenevaJobLauncher(job);
}
@Bean
@StepScope
public GenevaDetailsReader reader() {
GenevaDetailsReader reader = new GenevaDetailsReader(jobProperties().getMandatorKey(),
jobProperties().getInvoiceType(), jobProperties().getSqlResourcePath());
reader.setSql("");
reader.setDataSource(datasource);
reader.setRowMapper(new GenevaDetailsTransactionsRowMapper());
reader.setFetchSize(jobProperties().getFetchSize());
return reader;
}
@Bean
@StepScope
public GenevaDetailsItemProcessor processor() {
return new GenevaDetailsItemProcessor();
}
@Bean
@StepScope
public ClassifierCompositeItemWriter writer() {
List<String> serviceCodes = new ArrayList<>();//billrunTransferStatusMapper.getServiceCodes(jobProperties().getMandatorKey());
Long billingTaskId = billrunTransferStatusMapper.getCurrentTaskId(jobProperties().getMandatorKey());
String countryKey = billrunTransferStatusMapper.getCountryKey(billingTaskId);
serviceCodes.add("BTCC");
serviceCodes.add("CCMS");
BackToBackPatternClassifier classifier = new BackToBackPatternClassifier();
classifier.setRouterDelegate(new GenveaDetailsRouterClassifier());
HashMap<String, Object> map = new HashMap<>();
for (String serviceCode : serviceCodes) {
map.put(serviceCode, genevaDetailsWriter(serviceCode, countryKey));
}
classifier.setMatcherMap(map);
ClassifierCompositeItemWriter<GenveaDetailsTransactionDto> writer = new ClassifierCompositeItemWriter<>();
writer.setClassifier(classifier);
return writer;
}
@Bean
@StepScope
public GenevaDetailsFlatFileItemWriter genevaDetailsWriter(String serviceCode, String countryKey) {
GenevaDetailsFlatFileItemWriter writer = new GenevaDetailsFlatFileItemWriter(jobProperties().getDelimiter());
FileNameGeneration fileNameGeneration = new FileNameGeneration();
try {
FileSystemResource fileSystemResource = new FileSystemResource(new File(jobProperties().getExportDir(), fileNameGeneration.generateFileName(jdbcTemplate,
serviceCode, countryKey)));
writer.setResource(fileSystemResource);
} catch (SQLException e) {
LOGGER.error("Error creating FileSystemResource : " + e.getMessage());
}
return writer;
}
@Bean
public Job job() {
return jobBuilders.get(jobProperties().getJobName())
.start(setBillRunTransferStatusDetailInprogressStep())
.next(processGenevaDetailsStep())
.next(setBillRunTransferStatusProcessedStep())
.build();
}
@Bean
public Step setBillRunTransferStatusDetailInprogressStep() {
return stepBuilders.get("setBillRunTransferStatusDetailInprogressStep")
.tasklet(setBillRunTransferStatusDetailInprogress())
.build();
}
@Bean
public Tasklet setBillRunTransferStatusDetailInprogress() {
return new BillRunTranStatusInprogressJobAssignment(BillRunTransferStatus.SUMMARY.toString(), BillRunTransferStatus.DETAILS_INPROGRESS.toString(),
jobProperties().getMandatorKey(), jobProperties().getInvoiceTypeNum(), jobProperties().getReportTypeNum());
}
@Bean
public Step setBillRunTransferStatusProcessedStep() {
return stepBuilders.get("setBillRunTransferStatusProcessedStep")
.tasklet(setBillRunTransferStatusProcessed())
.build();
}
@Bean
public Tasklet setBillRunTransferStatusProcessed() {
return new BillRunTranStatusFinishedJobAssignment(BillRunTransferStatus.PROCESSED.toString());
}
@Bean
public Step processGenevaDetailsStep() {
return stepBuilders.get("processGenevaDetailsStep")
.<GenveaDetailsTransactionDto, GenevaDetailsResultsDto>chunk(jobProperties().getChunkSize())
.reader(reader())
.processor(processor())
.writer(writer())
.build();
}
}
and my writer looks like:
package com.infonova.btcompute.batch.geneva.steps;
import com.infonova.btcompute.batch.geneva.dto.GenevaDetailsResultsDto;
import com.infonova.btcompute.batch.repository.BillrunTransferStatusMapper;
import com.infonova.btcompute.batch.utils.FileNameGeneration;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.annotation.BeforeStep;
import org.springframework.batch.item.*;
import org.springframework.batch.item.file.FlatFileHeaderCallback;
import org.springframework.batch.item.file.FlatFileItemWriter;
import org.springframework.batch.item.file.transform.BeanWrapperFieldExtractor;
import org.springframework.batch.item.file.transform.DelimitedLineAggregator;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.core.io.FileSystemResource;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Component;
import java.io.File;
import java.io.IOException;
import java.io.Writer;
import java.sql.SQLException;
import java.util.Iterator;
import java.util.List;
@Component
public class GenevaDetailsFlatFileItemWriter extends FlatFileItemWriter<GenevaDetailsResultsDto> {
private static final Logger LOGGER = LoggerFactory.getLogger(GenevaDetailsFlatFileItemWriter.class);
@Autowired
protected JdbcTemplate jdbcTemplate;
@Autowired
private BillrunTransferStatusMapper billrunTransferStatusMapper;
private String delimiter;
public GenevaDetailsFlatFileItemWriter(String delimiter) {
this.delimiter = delimiter;
this.setLineAggregator(getLineAggregator());
this.setHeaderCallback(getHeaderCallback());
}
private DelimitedLineAggregator<GenevaDetailsResultsDto> getLineAggregator() {
DelimitedLineAggregator<GenevaDetailsResultsDto> delLineAgg = new DelimitedLineAggregator<>();
delLineAgg.setDelimiter(delimiter);
BeanWrapperFieldExtractor<GenevaDetailsResultsDto> fieldExtractor = new BeanWrapperFieldExtractor<>();
fieldExtractor.setNames(getNames());
delLineAgg.setFieldExtractor(fieldExtractor);
return delLineAgg;
}
private String[] getHeaderNames() {
return new String[] {"Record ID", "Service Identifier", "Billing Account Reference", "Cost Description", "Event Cost",
"Event Date and Time", "Currency Code", "Charge Category", "Order Identifier", "Net Usage", "UOM",
"Quantity", "Service Start Date", "Service End Date"};
}
private String[] getNames() {
return new String[] {"RECORD_ID", "SERVICE_CODE", "BILLING_ACCOUNT_REFERENCE", "COST_DESCRIPTION", "EVENT_COST",
"EVENT_DATE_AND_TIME", "CURRENCY_CODE", "CHARGE_CATEGORY", "ORDER_IDENTIFIER", "NET_USAGE", "UOM",
"QUANTITY", "SERVICE_START_DATE", "SERVICE_END_DATE"};
}
private FlatFileHeaderCallback getHeaderCallback()
{
return new FlatFileHeaderCallback() {
@Override
public void writeHeader(Writer writer) throws IOException {
writer.write(String.join(delimiter, getHeaderNames()));
}
};
}
// @BeforeStep
// public void beforeStep(StepExecution stepExecution) {
// billingTaskId = (Long) stepExecution.getJobExecution().getExecutionContext().get("billingTaskId");
// FileNameGeneration fileNameGeneration = new FileNameGeneration();
//
// try {
// FileSystemResource fileSystemResource = new FileSystemResource(new File(exportDir, fileNameGeneration.generateFileName(jdbcTemplate,
// serviceCode, billrunTransferStatusMapper.getCountryKey(billingTaskId))));
// setResource(fileSystemResource);
// } catch (SQLException e) {
// LOGGER.error("Error creating FileSystemResource : " + e.getMessage());
// }
// }
}
I have searched the web and cannot find a solution to this issue.
What @Hansjoerg Wingeier wrote about ClassifierCompositeItemWriter is correct, but the right way to resolve the problem is to register the delegated writer(s) as stream(s) using AbstractTaskletStepBuilder.stream(), to let Spring Batch manage the execution context lifecycle.
ClassifierCompositeItemWriter does not implement the ItemStream interface, hence the open method of your FlatFileItemWriter is never called.
The easiest thing to do is to call the open method when you create your classifier map:
for (String serviceCode : serviceCodes) {
FlatFileItemWriter writer = genevaDetailsWriter(serviceCode, countryKey);
writer.open(new ExecutionContext());
map.put(serviceCode, writer);
}
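A rough sketch of the stream()-based approach mentioned above, using the step from the posted configuration; the service codes and the "AT" country key are placeholders, and the delegates registered with stream() must be the very same writer instances that the classifier map routes to:
@Bean
public Step processGenevaDetailsStep() {
    // placeholder delegates; in the real job the service codes come from billrunTransferStatusMapper
    GenevaDetailsFlatFileItemWriter btccWriter = genevaDetailsWriter("BTCC", "AT");
    GenevaDetailsFlatFileItemWriter ccmsWriter = genevaDetailsWriter("CCMS", "AT");
    return stepBuilders.get("processGenevaDetailsStep")
            .<GenveaDetailsTransactionDto, GenevaDetailsResultsDto>chunk(jobProperties().getChunkSize())
            .reader(reader())
            .processor(processor())
            .writer(writer())
            // register the delegates as streams so Spring Batch calls open()/update()/close() on them
            .stream(btccWriter)
            .stream(ccmsWriter)
            .build();
}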
Main class for Subscriber: Application.java
package com.mynamespace;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.ComponentScan;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.contrib.pattern.DistributedPubSubExtension;
import akka.contrib.pattern.DistributedPubSubMediator;
import com.mynamespace.actors.SubscriberActor;
@SpringBootApplication
@ComponentScan(basePackages = "com.mynamespace.*")
public class Application {
public static void main(String[] args) throws InterruptedException {
ApplicationContext ctx = SpringApplication.run(Application.class, args);
// get hold of the actor system
ActorSystem system = ctx.getBean(ActorSystem.class);
ActorRef mediator = DistributedPubSubExtension.get(system).mediator();
ActorRef subscriber = system.actorOf(
Props.create(SubscriberActor.class), "subscriber");
// subscribe to the topic named "content"
mediator.tell(new DistributedPubSubMediator.Put(subscriber), subscriber);
// subscriber.tell("init", null);
System.out.println("Running.");
Thread.sleep(5000l);
}
}
Subscriber actor: SubscriberActor.java
package com.mynamespace.actors;
import java.util.ArrayList;
import java.util.List;
import akka.actor.UntypedActor;
import com.mynamespace.message.CategoryServiceRequest;
import com.mynamespace.message.CategoryServiceResponse;
public class SubscriberActor extends UntypedActor {
@Override
public void onReceive(Object msg) throws Exception {
if (msg instanceof CategoryServiceRequest) {
System.out.println("Request received for GetCategories.");
CategoryServiceResponse response = new CategoryServiceResponse();
List<String> categories = new ArrayList<>();
categories.add("Food");
categories.add("Fruits");
response.setCatgories(categories);
getSender().tell(response, getSelf());
} else if (msg instanceof String && msg.equals("init")) {
System.out.println("init called");
} else {
System.out
.println("Unhandelled message received for getCategories.");
}
}
}
Application.conf for subscriber
akka {
loglevel = INFO
stdout-loglevel = INFO
loggers = ["akka.event.slf4j.Slf4jLogger"]
extensions = ["akka.contrib.pattern.DistributedPubSubExtension"]
actor {
provider = "akka.cluster.ClusterActorRefProvider"
}
remote {
enabled-transports = ["akka.remote.netty.tcp"]
netty.tcp {
hostname = "127.0.0.1"
port = 0
}
}
cluster {
seed-nodes = [
"akka.tcp://mynamespace-actor-system#127.0.0.1:2551",
"akka.tcp://mynamespace-actor-system#127.0.0.1:2552"]
auto-down-unreachable-after = 10s
}
}
Main class for publisher: Application.java
package com.mynamespace;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.ComponentScan;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.contrib.pattern.DistributedPubSubExtension;
import akka.contrib.pattern.DistributedPubSubMediator;
import com.mynamespace.actors.PublisherActor;
@SpringBootApplication
@ComponentScan(basePackages = "com.mynamespace.*")
public class Application {
public static void main(String[] args) throws InterruptedException {
ApplicationContext ctx = SpringApplication.run(Application.class, args);
// get hold of the actor system
ActorSystem system = ctx.getBean(ActorSystem.class);
ActorRef mediator = DistributedPubSubExtension.get(system).mediator();
ActorRef publisher = system.actorOf(Props.create(PublisherActor.class),
"publisher");
mediator.tell(new DistributedPubSubMediator.Put(publisher), publisher);
Thread.sleep(5000);
publisher.tell("hi", publisher);
System.out.println("Running.");
}
}
PublisherActor.java
package com.mynamespace.actors;
import scala.concurrent.Future;
import akka.actor.ActorRef;
import akka.actor.UntypedActor;
import akka.contrib.pattern.DistributedPubSubExtension;
import akka.contrib.pattern.DistributedPubSubMediator;
import akka.dispatch.Mapper;
import akka.pattern.Patterns;
import akka.util.Timeout;
import com.mynamespace.message.CategoryServiceRequest;
import com.mynamespace.message.CategoryServiceResponse;
public class PublisherActor extends UntypedActor {
// activate the extension
ActorRef mediator = DistributedPubSubExtension.get(getContext().system())
.mediator();
public void onReceive(Object msg) {
if (msg instanceof String) {
Timeout timeOut = new Timeout(50000l);
mediator.tell(new DistributedPubSubMediator.Send(
"/user/subscriber", new CategoryServiceRequest()),
getSelf());
Future<Object> response = Patterns.ask(mediator,
new DistributedPubSubMediator.Send("/user/subscriber",
new CategoryServiceRequest()), timeOut);
Future<CategoryServiceResponse> finalresponse = response.map(
new Mapper<Object, CategoryServiceResponse>() {
@Override
public CategoryServiceResponse apply(Object parameter) {
CategoryServiceResponse responseFromRemote = (CategoryServiceResponse) parameter;
System.out.println("received:: list of size:: "
+ responseFromRemote.getCatgories().size());
return responseFromRemote;
}
}, getContext().system().dispatcher());
} else if (msg instanceof DistributedPubSubMediator.SubscribeAck) {
System.out.println("subscribbed.......");
} else {
unhandled(msg);
}
}
}
The application.conf for the publisher is the same as the subscriber's. Both are running on different ports on the same system.
I have two seed nodes defined and running on my local system. Somehow I am not able to ask/tell the subscriber from the publisher (both running on different nodes) via the DistributedPubSub mediator.
After running the subscriber and then the publisher, I don't get any exceptions or any dead-letter references printed in stdout/logs.
Is it possible to view which actor references my mediator holds?
I need help finding the issue, or possible issues.
I had the same problem. After the comments from @spam and my own experiments, what I can recommend is to use Publish/Subscribe with groups and sendOneMessageToEachGroup=true.
Is Send supposed to only work locally? If so, the documentation doesn't make that explicit. I can also tell from the code there that this specific part of the documentation has apparently been overlooked (the examples change the class names but then don't invoke them; they invoke the ones from the previous examples).
Hope this helps anyone who has this issue, as the docs are apparently a bit misleading.
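A minimal sketch of the group-based variant, assuming the newer akka.cluster.pubsub.DistributedPubSubMediator API (the akka.contrib.pattern package used above was later moved there); the topic and group names are made up:
// subscriber side: join topic "content" as a member of group "categories"
ActorRef mediator = DistributedPubSub.get(getContext().system()).mediator();
mediator.tell(new DistributedPubSubMediator.Subscribe("content", "categories", getSelf()), getSelf());

// publisher side: deliver each message to one subscriber per group
mediator.tell(new DistributedPubSubMediator.Publish("content", new CategoryServiceRequest(), true), getSelf());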
I am trying to query vendors using the QBOVendorService but having no luck.
I am creating the service as follows:
QBOVendorService vService = QBServiceFactory.getService(context, QBOVendorService.class);
where the context is a valid PlatformSessionContext. I know the platform session context is good since I can get information about the user with it. When I try
vService.addVendor(context, vendor);
I end up with an NPE as if my vService were null. Shouldn't I get an error when initializing the QBOVendorService if it fails? Is there a good place to find more examples of using this, since the Intuit developer forums have been shut down?
I'm sharing a sample code snippet. Replace your OAuth tokens and realmId. It should work fine.
import java.util.ArrayList;
import java.util.Calendar;
import java.util.Iterator;
import java.util.List;
import com.intuit.ds.qb.QBIdType;
import com.intuit.ds.qb.QBVendor;
import com.intuit.ds.qb.QBVendorQuery;
import com.intuit.ds.qb.QBVendorService;
import com.intuit.ds.qb.QBInvalidContextException;
import com.intuit.ds.qb.QBObjectFactory;
import com.intuit.ds.qb.QBServiceFactory;
import com.intuit.ds.qb.impl.QBRecordCountImpl;
import com.intuit.ds.qb.qbd.QBDRecordCountService;
import com.intuit.ds.qb.qbd.QBDServiceFactory;
import com.intuit.platform.client.PlatformSessionContext;
import com.intuit.platform.client.PlatformServiceType;
import com.intuit.platform.client.security.OAuthCredentials;
import com.intuit.ds.qb.QBSyncStatusRequest;
import com.intuit.ds.qb.QBSyncStatusRequestService;
import com.intuit.ds.qb.QBSyncStatusResponse;
import com.intuit.sb.cdm.NgIdSet;
import com.intuit.sb.cdm.ObjectName;
import org.slf4j.Logger;
// QBD API Docs - https://developer.intuit.com/docs/0025_quickbooksapi/0050_data_services/v2/0500_quickbooks_windows/0600_object_reference/vendor
// QBO API Docs - https://developer.intuit.com/docs/0025_quickbooksapi/0050_data_services/v2/0400_quickbooks_online/vendor
// JavaDocs - http://developer-static.intuit.com/SDKDocs/QBV2Doc/ipp-java-devkit-2.0.10-SNAPSHOT-javadoc/
public class CodegenStubVendorall {
final PlatformSessionContext context;
public CodegenStubVendorall(PlatformSessionContext context) {
this.context = context;
}
public void testAdd() {
final List<QBVendor> entityList = new ArrayList<QBVendor>();
try {
QBVendorService service = QBServiceFactory.getService(context, QBVendorService.class);
//Your Code
//Use Vendor POJO for creating Vendor
} catch (QBInvalidContextException e) {
e.printStackTrace();
} catch (Exception e) {
e.printStackTrace();
}
}
public static void main(String args[]) {
PlatformSessionContext context = getPlatformContext();
CodegenStubVendorall testObj = new CodegenStubVendorall(context);
testObj.testAdd();
}
public static PlatformSessionContext getPlatformContext() {
String accesstoken = "rplce_your_application_token";
String accessstokensecret = "rplce_your_application_token";
String appToken = "rplce_your_application_token";
String oauth_consumer_key = "rplce_your_application_token";
String oauth_consumer_secret = "rplce_your_application_token";
String realmID = "123456";
String dataSource = "QBO";
PlatformServiceType serviceType;
if (dataSource.equalsIgnoreCase("QBO")) {
serviceType = PlatformServiceType.QBO;
} else {
serviceType = PlatformServiceType.QBD;
}
final OAuthCredentials oauthcredentials = new OAuthCredentials(
oauth_consumer_key, oauth_consumer_secret, accesstoken,
accessstokensecret);
final PlatformSessionContext context = new PlatformSessionContext(
oauthcredentials, appToken, serviceType, realmID);
return context;
}
}
You can try using the ApiExplorer tool to verify your OAuth tokens and to check the Vendor create API endpoint.
Link - https://developer.intuit.com/apiexplorer?apiname=V2QBO
Please let me know how it goes.
Thanks