JUnit dynamic test cases reading data from a file? - java

I have a scenario where I need to test an API using payloads coming from a text file. Each line in the file represents one payload. How can I dynamically generate test cases for this scenario?
I tried calling one test from another as shown below, but I can only see the parent test passing.
import com.jayway.restassured.http.ContentType;
import com.jayway.restassured.response.Response;
import org.junit.Before;
import org.junit.Test;
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import static com.jayway.restassured.RestAssured.given;
public class ExampleTest
{
private TestUtil testUtil;
String payload;
@Before
public void init()
{
testUtil = new TestUtil();
}
@Test
public void runAllTests() throws IOException
{
List<String> request = getFileDataLineByLine();
for(String fileRequest:request)
{
payload=fileRequest;
if(null!=payload) {
testExampleTest();
}
}
}
@Test
public void testExampleTest()
{
String uri = "http://localhost:8080/url";
Response response = given()
.contentType(ContentType.JSON)
.body(payload)
.post(uri)
.then()
.statusCode(200)
.extract()
.response();
}
private List<String> getFileDataLineByLine() throws IOException {
File file = testUtil.getFileFromResources();
if (file == null)
return null;
String line;
List<String> stringList = new ArrayList<>();
try (FileReader reader = new FileReader(file);
BufferedReader br = new BufferedReader(reader))
{
while ((line = br.readLine()) != null)
{
stringList.add(line);
System.out.println(line);
}
}
return stringList;
}
}
File Reading Class:
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;
import java.net.URL;
public class TestUtil
{
public File getFileFromResources()throws IOException
{
TestUtil testUtil = new TestUtil();
File file = testUtil.getFileFromResources("testdata.txt");
return file;
}
// get file from classpath, resources folder
private File getFileFromResources(String fileName) {
ClassLoader classLoader = getClass().getClassLoader();
URL resource = classLoader.getResource(fileName);
if (resource == null) {
throw new IllegalArgumentException("file is not found!");
} else {
return new File(resource.getFile());
}
}
}
How can I generate test cases dynamically by taking input from a file?

If you can convert your file to CSV, JUnitParams supports loading test data from a CSV file.
Using an example from the JUnitParams repository:
public class PersonTest {
@Test
@FileParameters("src/test/resources/test.csv")
public void loadParamsFromCsv(int age, String name) {
assertFalse(age > 120);
}
}
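If converting the text file to CSV is not practical, JUnit 4's built-in Parameterized runner can also turn each line of the existing file into its own test case. This is only a sketch under the assumption that the payload file lives at src/test/resources/testdata.txt and the endpoint from the question is reachable:
import com.jayway.restassured.http.ContentType;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Collection;
import java.util.stream.Collectors;
import static com.jayway.restassured.RestAssured.given;
@RunWith(Parameterized.class)
public class PayloadFileTest {
    // every non-empty line of the file becomes one test case, reported individually
    @Parameters(name = "payload line {index}")
    public static Collection<Object[]> payloads() throws IOException {
        return Files.readAllLines(Paths.get("src/test/resources/testdata.txt"))
                .stream()
                .filter(line -> !line.trim().isEmpty())
                .map(line -> new Object[]{line})
                .collect(Collectors.toList());
    }
    private final String payload;
    public PayloadFileTest(String payload) {
        this.payload = payload;
    }
    @Test
    public void postPayload() {
        given()
            .contentType(ContentType.JSON)
            .body(payload)
            .post("http://localhost:8080/url")
            .then()
            .statusCode(200);
    }
}
Each line shows up as a separate test in the report, so one failing payload does not hide the others.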

Related

How to get the name of a file having particular content in it using Java?

Here I am trying to read a folder containing .sql files, and I am collecting those files in a list. My requirement is to read every file and look for a particular word, such as "join": if "join" is present in the file, return the file name, otherwise discard it. Can someone please help me with this?
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import java.util.stream.Stream;
public class Filter {
public static List<String> textFiles(String directory) {
List<String> textFiles = new ArrayList<String>();
File dir = new File(directory);
for (File file : dir.listFiles()) {
if (file.getName().endsWith((".sql"))) {
textFiles.add(file.getName());
}
}
return textFiles;
}
public static void getfilename(String directory) throws IOException {
List<String> textFiles = textFiles(directory);
for (String string : textFiles) {
Path path = Paths.get(string);
try (Stream<String> streamOfLines = Files.lines(path)) {
Optional<String> line = streamOfLines.filter(l -> l.contains("join")).findFirst();
if (line.isPresent()) {
System.out.println(path.getFileName());
} else
System.out.println("Not found");
} catch (Exception e) {
}
}
}
public static void main(String[] args) throws IOException {
getfilename("/home/niteshb/wave1-master/wave1/sql/scripts");
}
}
You can search for a word in a file as below; pass in the path of the file:
try (Stream<String> streamOfLines = Files.lines(path)) {
    Optional<String> line = streamOfLines.filter(l -> l.contains(searchTerm))
                                         .findFirst();
    if (line.isPresent()) {
        System.out.println(line.get()); // you can add return true or false
    } else {
        System.out.println("Not found");
    }
} catch (Exception e) {
}
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import java.util.stream.Stream;
public class Filter {
public static List<String> textFiles(String directory) {
List<String> textFiles = new ArrayList<String>();
File dir = new File(directory);
for (File file : dir.listFiles()) {
if (file.getName().endsWith((".sql"))) {
textFiles.add(file.getAbsolutePath());
}
}
System.out.println(textFiles.size());
return textFiles;
}
public static String getfilename(String directory) throws IOException {
List<String> textFiles = textFiles(directory);
for (String string : textFiles) {
Path path = Paths.get(string);
try (Stream<String> streamOfLines = Files.lines(path)) {
Optional<String> line = streamOfLines.filter(l -> l.contains("join")).findFirst();
if (line.isPresent()) {
System.out.println(path.getFileName());
} else
System.out.println("");
} catch (Exception e) {
}
}
return directory;
}
public static void main(String[] args) throws IOException {
getfilename("/home/wave1-master/wave1/sql/");
}
}
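If the goal is to actually return the matching file names rather than print them, the same idea can be collapsed into a single stream pipeline. This is only a sketch, assuming the .sql files sit directly inside the given directory and the match on "join" is case-sensitive:
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;
public class JoinFinder {
    // returns the names of all .sql files in the directory whose content contains "join"
    public static List<String> filesContainingJoin(String directory) throws IOException {
        try (Stream<Path> files = Files.list(Paths.get(directory))) {
            return files
                    .filter(p -> p.getFileName().toString().endsWith(".sql"))
                    .filter(p -> {
                        try (Stream<String> lines = Files.lines(p)) {
                            return lines.anyMatch(l -> l.contains("join"));
                        } catch (IOException e) {
                            return false; // unreadable file: treat as "no match" instead of failing the whole scan
                        }
                    })
                    .map(p -> p.getFileName().toString())
                    .collect(Collectors.toList());
        }
    }
    public static void main(String[] args) throws IOException {
        System.out.println(filesContainingJoin("/home/niteshb/wave1-master/wave1/sql/scripts"));
    }
}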

StreamingFileSink bulk writer results in a checkpoint error when running in AWS EMR

I am unable to use StreamingFileSink to store incoming events in a compressed fashion.
I am trying to use StreamingFileSink to write an unbounded event stream to S3. In the process, I would like to compress the data to make better use of the available storage.
I wrote a compressed string writer by borrowing some code from Flink's SequenceFileWriterFactory. It fails with the exception described below.
If I try to use BucketingSink, it works great.
Using BucketingSink, I approached the compressed string writer as below. Again, I borrowed this code from another pull request.
import org.apache.flink.streaming.connectors.fs.StreamWriterBase;
import org.apache.flink.streaming.connectors.fs.Writer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.CodecPool;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;
import org.apache.hadoop.io.compress.CompressionOutputStream;
import org.apache.hadoop.io.compress.Compressor;
import java.io.IOException;
public class CompressionStringWriter<T> extends StreamWriterBase<T> implements Writer<T> {
private static final long serialVersionUID = 3231207311080446279L;
private String codecName;
private String separator;
public String getCodecName() {
return codecName;
}
public String getSeparator() {
return separator;
}
private transient CompressionOutputStream compressedOutputStream;
public CompressionStringWriter(String codecName, String separator) {
this.codecName = codecName;
this.separator = separator;
}
public CompressionStringWriter(String codecName) {
this(codecName, System.lineSeparator());
}
protected CompressionStringWriter(CompressionStringWriter<T> other) {
super(other);
this.codecName = other.codecName;
this.separator = other.separator;
}
@Override
public void open(FileSystem fs, Path path) throws IOException {
super.open(fs, path);
Configuration conf = fs.getConf();
CompressionCodecFactory codecFactory = new CompressionCodecFactory(conf);
CompressionCodec codec = codecFactory.getCodecByName(codecName);
if (codec == null) {
throw new RuntimeException("Codec " + codecName + " not found");
}
Compressor compressor = CodecPool.getCompressor(codec, conf);
compressedOutputStream = codec.createOutputStream(getStream(), compressor);
}
@Override
public void close() throws IOException {
if (compressedOutputStream != null) {
compressedOutputStream.close();
compressedOutputStream = null;
} else {
super.close();
}
}
@Override
public void write(Object element) throws IOException {
getStream();
compressedOutputStream.write(element.toString().getBytes());
compressedOutputStream.write(this.separator.getBytes());
}
@Override
public CompressionStringWriter<T> duplicate() {
return new CompressionStringWriter<>(this);
}
}
BucketingSink<DeviceEvent> bucketingSink = new BucketingSink<>("s3://"+ this.bucketName + "/" + this.objectPrefix);
bucketingSink
.setBucketer(new OrgIdBasedBucketAssigner())
.setWriter(new CompressionStringWriter<DeviceEvent>("Gzip", "\n"))
.setPartPrefix("file-")
.setPartSuffix(".gz")
.setBatchSize(1_500_000);
The one with BucketingSink works.
Now my code snippets using StreamingFileSink involves the below set of code.
import org.apache.flink.api.common.serialization.BulkWriter;
import java.io.IOException;
public class CompressedStringBulkWriter<T> implements BulkWriter<T> {
private final CompressedStringWriter compressedStringWriter;
public CompressedStringBulkWriter(final CompressedStringWriter compressedStringWriter) {
this.compressedStringWriter = compressedStringWriter;
}
@Override
public void addElement(T element) throws IOException {
this.compressedStringWriter.write(element);
}
@Override
public void flush() throws IOException {
this.compressedStringWriter.flush();
}
@Override
public void finish() throws IOException {
this.compressedStringWriter.close();
}
}
import org.apache.flink.api.common.serialization.BulkWriter;
import org.apache.flink.core.fs.FSDataOutputStream;
import org.apache.hadoop.conf.Configuration;
import java.io.IOException;
public class CompressedStringBulkWriterFactory<T> implements BulkWriter.Factory<T> {
private SerializableHadoopConfiguration serializableHadoopConfiguration;
public CompressedStringBulkWriterFactory(final Configuration hadoopConfiguration) {
this.serializableHadoopConfiguration = new SerializableHadoopConfiguration(hadoopConfiguration);
}
@Override
public BulkWriter<T> create(FSDataOutputStream out) throws IOException {
return new CompressedStringBulkWriter(new CompressedStringWriter(out, serializableHadoopConfiguration.get(), "Gzip", "\n"));
}
}
import org.apache.flink.core.fs.FSDataOutputStream;
import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.core.fs.Path;
import org.apache.flink.runtime.fs.hdfs.HadoopFileSystem;
import org.apache.flink.util.Preconditions;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CodecPool;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;
import org.apache.hadoop.io.compress.CompressionOutputStream;
import org.apache.hadoop.io.compress.Compressor;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.io.Serializable;
public class CompressedStringWriter<T> implements Serializable {
private static final Logger LOG = LoggerFactory.getLogger(CompressedStringWriter.class);
private static final long serialVersionUID = 2115292142239557448L;
private String separator;
private transient CompressionOutputStream compressedOutputStream;
public CompressedStringWriter(FSDataOutputStream out, Configuration hadoopConfiguration, String codecName, String separator) {
this.separator = separator;
try {
Preconditions.checkNotNull(hadoopConfiguration, "Unable to determine hadoop configuration using path");
CompressionCodecFactory codecFactory = new CompressionCodecFactory(hadoopConfiguration);
CompressionCodec codec = codecFactory.getCodecByName(codecName);
Preconditions.checkNotNull(codec, "Codec " + codecName + " not found");
LOG.info("The codec name that was loaded from hadoop {}", codec);
Compressor compressor = CodecPool.getCompressor(codec, hadoopConfiguration);
this.compressedOutputStream = codec.createOutputStream(out, compressor);
LOG.info("Setup a compressor for codec {} and compressor {}", codec, compressor);
} catch (IOException ex) {
throw new RuntimeException("Unable to compose a hadoop compressor for the path", ex);
}
}
public void flush() throws IOException {
if (compressedOutputStream != null) {
compressedOutputStream.flush();
}
}
public void close() throws IOException {
if (compressedOutputStream != null) {
compressedOutputStream.close();
compressedOutputStream = null;
}
}
public void write(T element) throws IOException {
compressedOutputStream.write(element.toString().getBytes());
compressedOutputStream.write(this.separator.getBytes());
}
}
import org.apache.hadoop.conf.Configuration;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
public class SerializableHadoopConfiguration implements Serializable {
private static final long serialVersionUID = -1960900291123078166L;
private transient Configuration hadoopConfig;
SerializableHadoopConfiguration(Configuration hadoopConfig) {
this.hadoopConfig = hadoopConfig;
}
Configuration get() {
return this.hadoopConfig;
}
// --------------------
private void writeObject(ObjectOutputStream out) throws IOException {
this.hadoopConfig.write(out);
}
private void readObject(ObjectInputStream in) throws IOException {
final Configuration config = new Configuration();
config.readFields(in);
if (this.hadoopConfig == null) {
this.hadoopConfig = config;
}
}
}
My actual Flink job:
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
Properties kinesisConsumerConfig = new Properties();
...
...
DataStream<DeviceEvent> kinesis =
env.addSource(new FlinkKinesisConsumer<>(this.streamName, new DeviceEventSchema(), kinesisConsumerConfig)).name("source")
.setParallelism(16)
.setMaxParallelism(24);
final StreamingFileSink<DeviceEvent> bulkCompressStreamingFileSink = StreamingFileSink.<DeviceEvent>forBulkFormat(
path,
new CompressedStringBulkWriterFactory<>(
BucketingSink.createHadoopFileSystem(
new Path("s3a://"+ this.bucketName + "/" + this.objectPrefix),
null).getConf()))
.withBucketAssigner(new OrgIdBucketAssigner())
.build();
deviceEventDataStream.addSink(bulkCompressStreamingFileSink).name("bulkCompressStreamingFileSink").setParallelism(16);
env.execute();
I expect the data to be saved in S3 as multiple files. Unfortunately, no files are being created.
In the logs, I see the exception below:
2019-05-15 22:17:20,855 INFO org.apache.flink.runtime.taskmanager.Task - Sink: bulkCompressStreamingFileSink (11/16) (c73684c10bb799a6e0217b6795571e22) switched from RUNNING to FAILED.
java.lang.Exception: Could not perform checkpoint 1 for operator Sink: bulkCompressStreamingFileSink (11/16).
at org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpointOnBarrier(StreamTask.java:595)
at org.apache.flink.streaming.runtime.io.BarrierBuffer.notifyCheckpoint(BarrierBuffer.java:396)
at org.apache.flink.streaming.runtime.io.BarrierBuffer.processBarrier(BarrierBuffer.java:292)
at org.apache.flink.streaming.runtime.io.BarrierBuffer.getNextNonBlocked(BarrierBuffer.java:200)
at org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:209)
at org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:105)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:300)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.Exception: Could not complete snapshot 1 for operator Sink: bulkCompressStreamingFileSink (11/16).
at org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:422)
at org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.checkpointStreamOperator(StreamTask.java:1113)
at org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.executeCheckpointing(StreamTask.java:1055)
at org.apache.flink.streaming.runtime.tasks.StreamTask.checkpointState(StreamTask.java:729)
at org.apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint(StreamTask.java:641)
at org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpointOnBarrier(StreamTask.java:586)
... 8 more
Caused by: java.io.IOException: Stream closed.
at org.apache.flink.fs.s3.common.utils.RefCountedFile.requireOpened(RefCountedFile.java:117)
at org.apache.flink.fs.s3.common.utils.RefCountedFile.write(RefCountedFile.java:74)
at org.apache.flink.fs.s3.common.utils.RefCountedBufferingFileStream.flush(RefCountedBufferingFileStream.java:105)
at org.apache.flink.fs.s3.common.writer.S3RecoverableFsDataOutputStream.closeAndUploadPart(S3RecoverableFsDataOutputStream.java:199)
at org.apache.flink.fs.s3.common.writer.S3RecoverableFsDataOutputStream.closeForCommit(S3RecoverableFsDataOutputStream.java:166)
at org.apache.flink.streaming.api.functions.sink.filesystem.PartFileWriter.closeForCommit(PartFileWriter.java:71)
at org.apache.flink.streaming.api.functions.sink.filesystem.BulkPartWriter.closeForCommit(BulkPartWriter.java:63)
at org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.closePartFile(Bucket.java:239)
at org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.prepareBucketForCheckpointing(Bucket.java:280)
at org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.onReceptionOfCheckpoint(Bucket.java:253)
at org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.snapshotActiveBuckets(Buckets.java:244)
at org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.snapshotState(Buckets.java:235)
at org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink.snapshotState(StreamingFileSink.java:347)
at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.trySnapshotFunctionState(StreamingFunctionUtils.java:118)
at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.snapshotFunctionState(StreamingFunctionUtils.java:99)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.snapshotState(AbstractUdfStreamOperator.java:90)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:395)
So I am wondering what I am missing.
I am using the latest AWS EMR release (5.23).
In CompressedStringBulkWriter#finish(), you are calling close() on the CompressionOutputStream, which also closes the underlying stream, i.e. Flink's FSDataOutputStream. That stream has to stay open for Flink's internals to perform checkpointing properly and guarantee a recoverable stream. That is why you are getting:
Caused by: java.io.IOException: Stream closed.
at org.apache.flink.fs.s3.common.utils.RefCountedFile.requireOpened(RefCountedFile.java:117)
at org.apache.flink.fs.s3.common.utils.RefCountedFile.write(RefCountedFile.java:74)
at org.apache.flink.fs.s3.common.utils.RefCountedBufferingFileStream.flush(RefCountedBufferingFileStream.java:105)
at org.apache.flink.fs.s3.common.writer.S3RecoverableFsDataOutputStream.closeAndUploadPart(S3RecoverableFsDataOutputStream.java:199)
at org.apache.flink.fs.s3.common.writer.S3RecoverableFsDataOutputStream.closeForCommit(S3RecoverableFsDataOutputStream.java:166)
So instead of compressedOutputStream.close(), use compressedOutputStream.finish(), which flushes everything in the buffer to the output stream without closing it. By the way, there is a built-in HadoopCompressionBulkWriter available in the latest version of Flink; you can use that as well.
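A sketch of how that change could look in the classes above; the finish() helper on CompressedStringWriter is an addition of mine, not an existing method of the posted class:
// In CompressedStringWriter: end the compressed stream without closing Flink's FSDataOutputStream
public void finish() throws IOException {
    if (compressedOutputStream != null) {
        compressedOutputStream.finish(); // writes the compression trailer but leaves the wrapped stream open
        compressedOutputStream = null;
    }
}
// In CompressedStringBulkWriter: delegate to the new helper instead of close()
@Override
public void finish() throws IOException {
    this.compressedStringWriter.finish();
}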

dynamic header CSVParser

The code below parses CSV records when the header is always known in advance, so we can declare the array values for FILE_HEADER_MAPPING.
CSVFormat csvFileFormat = CSVFormat.DEFAULT.withHeader(FILE_HEADER_MAPPING);
FileReader fileReader = new FileReader("file");
CSVParser csvFileParser = new CSVParser(fileReader, csvFileFormat);
Iterable<CSVRecord> records = csvFileParser.getRecords();
But how do I create the CSVParser for CSV files in which the header differs from file to file?
I will not know the header of the CSV file when creating the format:
CSVFormat csvFileFormat = CSVFormat.DEFAULT.withHeader(FILE_HEADER_MAPPING);
I want to have a CSV parser for each possible set of CSV headers.
Please help me solve this scenario.
package dfi.fin.dcm.syn.loantrading.engine.source.impl;
import static dfi.fin.dcm.syn.loantrading.engine.task.impl.BackOfficeCSVHelper.AMOUNT;
import static dfi.fin.dcm.syn.loantrading.engine.task.impl.BackOfficeCSVHelper.FCN;
import static dfi.fin.dcm.syn.loantrading.engine.task.impl.BackOfficeCSVHelper.FEE_TYPE;
import static dfi.fin.dcm.syn.loantrading.engine.task.impl.BackOfficeCSVHelper.LINE_TYPE;
import static dfi.fin.dcm.syn.loantrading.engine.task.impl.BackOfficeCSVHelper.LINE_TYPE_VALUE_CARRY_EVT;
import static dfi.fin.dcm.syn.loantrading.engine.task.impl.BackOfficeCSVHelper.MARKIT_ID;
import static dfi.fin.dcm.syn.loantrading.engine.task.impl.BackOfficeCSVHelper.VALUE_DATE;
import java.io.IOException;
import java.io.InputStream;
import java.math.BigDecimal;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Arrays;
import java.util.Calendar;
import java.util.List;
import com.csvreader.CsvReader.CatastrophicException;
import com.csvreader.CsvReader.FinalizedException;
import dfi.fin.dcm.syn.loantrading.engine.source.SourceException;
import dfi.fin.dcm.syn.loantrading.model.portfolio.Portfolio;
@Deprecated
public class CarryEventStreamSource extends AbstractInputStreamSource<CarryEventData> {
private static String [] headers = {LINE_TYPE,VALUE_DATE,MARKIT_ID,FEE_TYPE,AMOUNT};
private SimpleDateFormat dateFormat = null;
public CarryEventStreamSource(InputStream stream) {
super(stream);
dateFormat = new SimpleDateFormat("dd/MM/yy");
}
public CarryEventData readNextElementInternal() throws SourceException, IOException, CatastrophicException, FinalizedException {
//skipping all events which are not Carry
boolean loop = true;
while (loop) {
// skipping all events which are not Carry
if(getReader().readRecord() && !getReader().get(LINE_TYPE).trim().equals(LINE_TYPE_VALUE_CARRY_EVT)) {
loop = true;
} else {
loop = false;
}
}
//EOF?
if (getReader().get(LINE_TYPE).trim().equals(LINE_TYPE_VALUE_CARRY_EVT)) {
CarryEventData toReturn = new CarryEventData();
toReturn.setComputationDate(Calendar.getInstance().getTime());
try {
toReturn.setValueDate(getDateFormat().parse(getReader().get(VALUE_DATE).trim()));
} catch (ParseException e) {
throw new SourceException(e);
}
if (!getPortfolio().getMtmSourceType().equals(Portfolio.MTM_SOURCE_TYPE_NONE)) {
if (getReader().get(MARKIT_ID).trim() == null) {
throw new SourceException("Back Office file invalid data format: the markit id is missing on line "+getReader().getCurrentRecord());
}
toReturn.setTrancheMarkitId(getReader().get(MARKIT_ID).trim());
} else {
if (getReader().get(FCN)==null || "".equals(getReader().get(FCN).trim())) {
throw new SourceException("Back Office file invalid data format: missing loan tranche id on line "+getReader().getCurrentRecord());
}
toReturn.setTrancheMarkitId(getReader().get(FCN).trim());
}
if (getReader().get(FEE_TYPE).equals("")) {
toReturn.setFeeType(null);
} else {
toReturn.setFeeType(getReader().get(FEE_TYPE).trim());
}
if (getReader().get(AMOUNT)==null) {
throw new SourceException("Back Office file invalid data format: missing amount on line "+getReader().getCurrentRecord());
}
try {
toReturn.setAmount(new BigDecimal(getReader().get(AMOUNT)));
} catch (NumberFormatException ex) {
throw new SourceException(ex,"Back Office file invalid data format: invalid amount on line "+getReader().getCurrentRecord());
}
return toReturn;
}
// no carry found, null is returned
return null;
}
public SimpleDateFormat getDateFormat() {
return dateFormat;
}
public void setDateFormat(SimpleDateFormat dateFormat) {
this.dateFormat = dateFormat;
}
@Override
public char getDelimiter() {
return ',';
}
@Override
public List<String> getHeaderSet() {
return Arrays.asList(headers);
}
@Override
public String getName() {
return "File import";
package dfi.fin.dcm.syn.loantrading.engine.source.impl;
import java.io.IOException;
import java.io.InputStream;
import java.math.BigDecimal;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Arrays;
import java.util.Calendar;
import java.util.List;
import com.csvreader.CsvReader.CatastrophicException;
import com.csvreader.CsvReader.FinalizedException;
import dfi.fin.dcm.syn.loantrading.engine.source.SourceException;
import dfi.fin.dcm.syn.loantrading.model.common.LTCurrency;
import dfi.fin.dcm.syn.loantrading.model.engine.event.CurrencyEvent;
public class SpotForexRateStreamSource extends AbstractInputStreamSource<CurrencyEvent> {
private SimpleDateFormat dateFormat;
private static String [] headers = {"CURRENCY","DATE","MID"};
public SpotForexRateStreamSource(InputStream stream) {
super(stream);
dateFormat = new SimpleDateFormat("dd/MM/yy");
}
@Override
public CurrencyEvent readNextElementInternal() throws SourceException, IOException, FinalizedException, CatastrophicException {
//skipping all events which are not Trade
if (getReader().readRecord()) {
CurrencyEvent event = new CurrencyEvent();
//retrieving the currency
LTCurrency currency = getCurrencyDAO().getLTCurrencyByISOCode(getReader().get("CURRENCY"));
event.setCurrency(currency);
try {
event.setDate(getDateFormat().parse(getReader().get("DATE")));
} catch (ParseException e) {
throw new SourceException(e, "Parse error while reading currency event date");
}
event.setExchangeRate(new BigDecimal(getReader().get("MID")));
event.setComputationDate(Calendar.getInstance().getTime());
return event;
}
return null;
}
@Override
public char getDelimiter() {
return ';';
}
public SimpleDateFormat getDateFormat() {
return dateFormat;
}
public void setDateFormat(SimpleDateFormat dateFormat) {
this.dateFormat = dateFormat;
}
@Override
public List<String> getHeaderSet() {
return Arrays.asList(headers);
}
@Override
public String getName() {
return "CSV File";
}
}
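For the Commons CSV part of the question, the header does not have to be declared up front at all: since version 1.3 the format can take it from the first record of each file. A minimal sketch, assuming a recent Commons CSV version and files whose first line is the header row:
import java.io.FileReader;
import java.io.IOException;
import java.io.Reader;
import java.util.Map;
import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVParser;
import org.apache.commons.csv.CSVRecord;
public class DynamicHeaderCsvExample {
    public static void main(String[] args) throws IOException {
        try (Reader reader = new FileReader("file");
             // the header names are read from the first record of this particular file
             CSVParser parser = new CSVParser(reader, CSVFormat.DEFAULT.withFirstRecordAsHeader())) {
            System.out.println("Headers: " + parser.getHeaderMap().keySet());
            for (CSVRecord record : parser) {
                Map<String, String> row = record.toMap(); // column name -> value, for whatever headers this file has
                System.out.println(row);
            }
        }
    }
}
record.toMap() then gives a name-to-value map per row, so the downstream code does not need to know the header set in advance.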

PlayFramework: how to access a list from a super controller

I am completing my university project, but I have encountered a weird problem. Since I am a student, I apologize in advance if it is trivial.
I have a BasicCommonController which has List<String> backendErrors = new ArrayList<>(), and I have another controller which extends BasicCommonController. I am able to access the backendErrors list from BasicCommonController, but I am not able to put a new element into the list, which is always empty. I have tried to access it via super.backendErrors, but that does not work either.
How can I add an error to super.backendErrors and access it in other controllers?
This is the abstract controller:
package controllers;
import org.apache.commons.lang3.StringUtils;
import play.Logger;
import play.Play;
import play.mvc.Controller;
import java.util.ArrayList;
import java.util.List;
/**
* Created by vv on 22.04.2017.
*/
public class BasicAbstractController extends Controller {
public static final String GO_HOME = "/";
public List<String> backendErrors = new ArrayList<>();
public static String getPlaceToObserve(){
String place = Play.application().configuration().getString("storage.place");
if(StringUtils.isNotBlank(place)){
return place;
}
return StringUtils.EMPTY;
}
public static String getServerInstance(){
String instance = Play.application().configuration().getString("storage.place");
if(StringUtils.isNotBlank(instance)){
return instance;
}
return StringUtils.EMPTY;
}
}
This is an example controller:
package controllers;
import com.google.common.io.Files;
import com.sun.org.apache.regexp.internal.RE;
import constans.AppCommunicates;
import play.Logger;
import play.mvc.Http;
import play.mvc.Result;
import util.FileUtil;
import java.io.File;
import java.io.IOException;
import java.util.List;
/**
* Created by vv on 22.04.2017.
*/
public class FileUploadController extends BasicAbstractController {
public Result upload() {
Http.MultipartFormData<File> body = request().body().asMultipartFormData();
Http.MultipartFormData.FilePart<File> picture = body.getFile("picture");
if (picture != null) {
String fileName = picture.getFilename();
String contentType = picture.getContentType();
File file = picture.getFile();
File fileToSave = new File(getPlaceToObserve() + "/" + picture.getFilename());
try{
Files.copy(file,fileToSave);
}
catch (IOException ioe){
Logger.error("Unable to write file");
}
Logger.error("File Handled Cuccessfully");
return redirect(GO_HOME);
} else {
flash("error", "Missing file");
return badRequest();
}
}
public Result delete(String fileName){
List<File> files = FileUtil.getCurrentFileNames();
File fileToDelete = null;
for (File file : files) {
if(file.getName().equals(fileName)){
fileToDelete = file;
break;
}
}
boolean deletionResult = FileUtil.deleteGivenFile(fileToDelete);
if(!deletionResult){
// I am not able to add anything to the list here
backendErrors.add(AppCommunicates.UNABLE_TO_DELETE_FILE);
}
return redirect(GO_HOME);
}
}
I am not able to add to, or access, the list from other controllers.
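One likely explanation is that a new controller instance can be created per request, so an inherited instance field starts out as a fresh, empty list each time rather than being shared. A common workaround, sketched here under the assumption that standard JSR-330 dependency injection is available, is to keep the shared state in a @Singleton component and inject it where needed; the class and package names below are hypothetical:
package services;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import javax.inject.Singleton;
// One shared instance for the whole application, so every controller sees the same list.
@Singleton
public class BackendErrors {
    private final List<String> errors = Collections.synchronizedList(new ArrayList<>());
    public void add(String error) {
        errors.add(error);
    }
    public List<String> all() {
        return errors;
    }
}
A controller would then receive BackendErrors through an @Inject constructor (javax.inject.Inject) instead of relying on the inherited backendErrors field.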

Running an external Python script from Maven

I have to run a Python script from a Maven project. I created a temporary class with a main method to check if it works as expected, used ProcessBuilder, and it works if I specify the absolute path of the Python script and then run the Java class from Eclipse using Run As > Java Application.
If I change it to getClass().getResourceAsStream("/scripts/script.py"), it throws an exception because it cannot locate the Python script.
What would be the best place to put the Python script, and how can I access it in the Java class without specifying the complete path? Since I am new to Maven, the problem could be due to the way I am executing the Java program.
package discourse.apps.features;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.json.simple.JSONObject;
import org.json.simple.parser.JSONParser;
import org.json.simple.parser.ParseException;
public class Test {
protected String scriptPath = "/Users/user1/project1/scripts/script.py";
protected String python3Path = "/Users/user1/.virtualenvs/python3/bin/python3";
public static void main(String[] args) throws IOException {
new Test().score();
}
public JSONObject score() {
String text1="a";
String text2="b";
JSONObject rmap =null;
try
{
String line= null;
String writedir=System.getProperty("user.dir")+ "/Tmp";
String pbCommand[] = { python3Path, scriptPath,"--stringa", text1, "--stringb",text2,"--writedir", writedir };
ProcessBuilder pb = new ProcessBuilder(pbCommand);
Process p = pb.start();
InputStream is = p.getInputStream();
InputStreamReader isr = new InputStreamReader(is);
BufferedReader br = new BufferedReader(isr);
while ((line = br.readLine()) != null) {
JSONParser parser = new JSONParser();
rmap= (JSONObject) parser.parse(line);
}
} catch (IOException | ParseException ioe) {
System.err.println("Error running script");
ioe.printStackTrace();
System.exit(0);
}
return rmap;
}
}
Here is the output from the pbCommand array:
pbCommand[0]:/Users/user1/.virtualenvs/python3/bin/python3
pbCommand[1]:displays the complete python script
import os,sys
from pyrouge import Rouge155
import json
from optparse import OptionParser
def get_opts():
parser = OptionParser()
parser.add_option("--stringa", dest="str_a",help="First string")
parser.add_option("--stringb", dest= "str_b",help="second string")
parser.add_option("--writedir", dest="write_dir", help="Tmp write directory for rouge")
(options, args) = parser.parse_args()
if options.str_a is None:
print("Error: requires string")
parser.print_help()
sys.exit(-1)
if options.str_b is None:
print("Error:requires string")
parser.print_help()
sys.exit(-1)
if options.write_dir is None:
print("Error:requires write directory for rouge")
parser.print_help()
sys.exit(-1)
return (options, args)
def readTextFile(Filename):
f = open(Filename, "r", encoding='utf-8')
TextLines=f.readlines()
f.close()
return TextLines
def writeTextFile(Filename,Lines):
f = open(Filename, "w",encoding='utf-8')
f.writelines(Lines)
f.close()
def rougue(stringa, stringb, writedirRouge):
newrow={}
r = Rouge155()
count=0
dirname_sys= writedirRouge +"rougue/System/"
dirname_mod=writedirRouge +"rougue/Model/"
if not os.path.exists(dirname_sys):
os.makedirs(dirname_sys)
if not os.path.exists(dirname_mod):
os.makedirs(dirname_mod)
Filename=dirname_sys +"string_."+str(count)+".txt"
LinesA=list()
LinesA.append(stringa)
writeTextFile(Filename, LinesA)
LinesB=list()
LinesB.append(stringb)
Filename=dirname_mod+"string_.A."+str(count)+ ".txt"
writeTextFile(Filename, LinesB)
r.system_dir = dirname_sys
r.model_dir = dirname_mod
r.system_filename_pattern = 'string_.(\d+).txt'
r.model_filename_pattern = 'string_.[A-Z].#ID#.txt'
output = r.convert_and_evaluate()
output_dict = r.output_to_dict(output)
newrow["rouge_1_f_score"]=output_dict["rouge_1_f_score"]
newrow["rouge_2_f_score"]=output_dict["rouge_2_f_score"]
newrow["rouge_3_f_score"]=output_dict["rouge_3_f_score"]
newrow["rouge_4_f_score"]=output_dict["rouge_4_f_score"]
newrow["rouge_l_f_score"]=output_dict["rouge_l_f_score"]
newrow["rouge_s*_f_score"]=output_dict["rouge_s*_f_score"]
newrow["rouge_su*_f_score"]=output_dict["rouge_su*_f_score"]
newrow["rouge_w_1.2_f_score"]=output_dict["rouge_w_1.2_f_score"]
rouge_dict=json.dumps(newrow)
print (rouge_dict)
def run():
(options, args) = get_opts()
stringa=options.str_a
stringb=options.str_b
writedir=options.write_dir
rougue(stringa, stringb, writedir)
if __name__ == '__main__':
run()
pbCommand[2]:--stringa
pbCommand[3]:a
pbCommand[4]:--stringb
pbCommand[5]:b
pbCommand[6]:--writedir
pbCommand[7]:/users/user1/project1/Tmp
Put the script in the src/main/resources folder; it will then be copied to the target folder when you build.
Then make sure you use something like the com.google.common.io.Resources class, which you can add with:
<dependency>
<groupId>com.google.guava</groupId>
<artifactId>guava-io</artifactId>
<version>r03</version>
</dependency>
I then have a class like this which helps to convert resource files to Strings:
import java.net.MalformedURLException;
import java.net.URI;
import java.net.URL;
import com.google.common.base.Charsets;
import com.google.common.io.Resources;
public class FileUtil
{
public static String convertResourceToString(URL url)
{
try
{
return Resources.toString(url, Charsets.UTF_8);
}
catch (Exception e)
{
return null;
}
}
public static String convertResourceToString(String path)
{
return convertResourceToString(Resources.getResource(path));
}
public static String convertResourceToString(URI url)
{
try
{
return convertResourceToString(url.toURL());
}
catch (MalformedURLException e)
{
return null;
}
}
}
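One more thing to watch out for: ProcessBuilder needs a real file path, not a classpath resource, so getResourceAsStream alone is not enough once the script is packaged inside a jar. A workaround, sketched here under the assumption that the script is packaged as src/main/resources/scripts/script.py, is to copy the resource to a temporary file and hand that path to the Python interpreter:
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
public class ScriptExtractor {
    // Copies the packaged script to a temporary file and returns its absolute path,
    // so it can be passed to ProcessBuilder even when the code runs from a jar.
    public static String extractScript() throws IOException {
        try (InputStream in = ScriptExtractor.class.getResourceAsStream("/scripts/script.py")) {
            if (in == null) {
                throw new IOException("scripts/script.py not found on the classpath");
            }
            Path tmp = Files.createTempFile("script", ".py");
            tmp.toFile().deleteOnExit();
            Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
            return tmp.toAbsolutePath().toString();
        }
    }
}
The returned path can then replace the hard-coded scriptPath in the Test class from the question.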
Some advice if you are learning Maven: try using it instead of the IDE to run and package your application; that is what it is supposed to do. Then, once you are confident that the application will function as a packaged jar, just use the IDE to run it.
