Mocking console using Mockito - Java

I am printing the list of books from a library using a PrintBooksOperation class, and I want to check that the proper output is written to the console.
This is what I have tried so far. Can someone please explain what I'm doing wrong here? Thanks in advance.
PrintBooksOperation.java
package tw51.biblioteca.io.menu;

import tw51.biblioteca.Lendable;
import tw51.biblioteca.Library;
import tw51.biblioteca.io.Input;
import tw51.biblioteca.io.Output;
import tw51.biblioteca.io.menu.home.MenuOptions;

import java.util.List;

import static tw51.biblioteca.ItemType.Book;

/**
 * Prints the items of type Book.
 */
public class PrintBooksOperation implements MenuOptions {
    private Library library;
    private Output writer;

    @Override
    public void execute(Library library, Input reader, Output writer) {
        this.library = library;
        this.writer = writer;
        printBooks();
    }

    private void printBooks() {
        writer.formattedHeadings();
        writer.write("\n");
        List<Lendable> items = library.listItems();
        items.stream().filter(item -> item.isOfType(Book)).forEach(item -> {
            writer.write("\n" + item.toFormattedString());
        });
    }
}
PrintBooksOperationTest.java
package tw51.biblioteca.io.menu;

import org.junit.Test;
import tw51.biblioteca.Book;
import tw51.biblioteca.Library;
import tw51.biblioteca.io.Input;
import tw51.biblioteca.io.Output;

import java.util.Arrays;
import java.util.LinkedList;

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

public class PrintBooksOperationTest {

    @Test
    public void areTheBooksPrintedCorrectly() {
        Input reader = mock(Input.class);
        Output writer = mock(Output.class);
        Book book = new Book("nin", "#123", "ghy", 2003);
        Library library = new Library(new LinkedList<>(Arrays.asList(book)));
        PrintBooksOperation print = new PrintBooksOperation();

        print.execute(library, reader, writer);

        verify(writer).write("");
    }
}
Input and Output are interfaces whose implementations handle console reading and writing.
My Error Message:
Argument(s) are different! Wanted:
output.write(
""
);
-> at tw51.biblioteca.io.menu.PrintBooksOperationTest.areTheBooksPrintedCorrectly(PrintBooksOperationTest.java:28)
Actual invocation has different arguments:
output.write(
"
"
);
Why don't the actual arguments match what I expect? The print operation works when I run it. Is there something that I am doing wrong? Or is there another way to test console output?

When you call verify(writer).write("") you are asserting that write was called with the argument "".
From your implementation, however, you write to it several times:
private void printBooks() {
    writer.formattedHeadings();
    writer.write("\n"); // <-- This is the first time
    List<Lendable> items = library.listItems();
    items.stream().filter(item -> item.isOfType(Book)).forEach(item -> {
        writer.write("\n" + item.toFormattedString());
    });
}
Note that the first time you call write the argument is actually "\n", which is a newline; this does not match an empty string, so the test fails. Either change the test to check for "\n" or change the method to print what you expect.
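If you want to assert on everything that was written, rather than on one particular call, one option (a sketch, not something your existing test already does) is to capture every argument passed to write with Mockito's ArgumentCaptor:

// Sketch: collect all arguments passed to writer.write(...) in invocation order.
// Assumes Book exposes the same toFormattedString() used by the implementation above,
// plus imports for org.mockito.ArgumentCaptor, java.util.List, and the static
// org.mockito.Mockito.atLeastOnce / org.junit.Assert.assertEquals.
ArgumentCaptor<String> captor = ArgumentCaptor.forClass(String.class);
verify(writer, atLeastOnce()).write(captor.capture());

List<String> written = captor.getAllValues();
assertEquals("\n", written.get(0));                              // the heading newline
assertEquals("\n" + book.toFormattedString(), written.get(1));   // the book line

getAllValues() returns the arguments in the order write was invoked, so this checks both calls without needing a separate verify for each one.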

The error message says that the actual call was made with extra whitespace, i.e. a newline (notice that the quotes are on different lines), while your "expected" value is an empty string ("").
You either need to add this whitespace to your expected value or change your function.

Related

Save and read multiple types of setting in a text file

I want to have a settings system where I can read, write, and use variables that are stored in a file.
To summarize, there is a class, and inside that class is a list of settings.
When I make a setting I want to add it to the list so that I can write it to the text file later.
I also want to be able to get the setting value without casting it, which would use generics.
So for boolSetting I would only need to do boolSetting.get() or boolSetting.value, etc.
To start with the code I have already written: I have the code to read and write to the file, and this works (I think). I just need help with the setting part. Here is the read and write code:
package winter.settings;

import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.FileWriter;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.io.UnsupportedEncodingException;
import net.minecraft.src.Config;
import winter.Client;

public class WinterSettings {

    public static File WinterSetting;

    public static void readSettings() {
        try {
            File WinterSetting = new File(System.getProperty("user.dir"), "WinterSettings.txt");
            BufferedReader bufferedreader = new BufferedReader(
                    new InputStreamReader(new FileInputStream(WinterSetting), "UTF-8"));
            String s = "";
            while ((s = bufferedreader.readLine()) != null) {
                System.out.println(s);
                String[] astring = s.split(":");
                Client.modules.forEach(m -> {
                    if (m.name == astring[0]) {
                        m.settings.forEach(setting -> {
                            if (setting.name == astring[1]) {
                                setting.value = astring[2];
                            }
                        });
                    }
                });
            }
            bufferedreader.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static void writeSettings() {
        try {
            File WinterSetting = new File(System.getProperty("user.dir"), "WinterSettings.txt");
            PrintWriter printwriter = new PrintWriter(new FileWriter(WinterSetting));
            Client.modules.forEach(m -> {
                m.settings.forEach(setting -> {
                    printwriter.println(m.name + ":" + setting.name + ":" + setting.value);
                });
            });
            printwriter.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Pretty much how this works: I have a setting in a Module, which just stores some information. The setting has a name and a value.
To write it I just write the module name, the setting name, and the setting value, for example: ModuleName:SettingName:false
This works fine, but leads to the problem that I just don't know enough about generics. I can't find a way that works for writing, reading, and setting/getting. The setting should have a name and a value. Some code I wrote is below; I just don't know how to continue it.
public class Setting<T> {

    public String name;
    public T value;

    public Setting(String name, T value) {
        this.name = name;
        this.value = value;
    }

    public T getValue() {
        return value;
    }
}
From here I have subclasses for each type of setting. Not sure if this is good programming or not.
Now I can set, get, and write, but when I read, the value isn't updated correctly.
Right now I make a new setting like
private final BooleanSetting toggleSprint = new BooleanSetting("ToggleSprint", true);
There is one problem with this from what I can tell: when I try to add it to a list while initializing, I get an error:
Type mismatch: cannot convert from boolean to BooleanSetting.
In short: I need to be able to read, write, get, and set a value in a setting object. This can be boolean / int / etc.
Above is some of my code to read/write the txt file, the Setting class, and what I have of making a new setting.
My two problems are that the settings aren't read back correctly and that, when creating them, I can't add them to a list.
Use the Boolean.TRUE constant:
new BooleanSetting("ToggleSprint", Boolean.TRUE);
or
Boolean.valueOf(true)
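As a sketch of the generics side (class and method names such as BooleanSetting and setFromString are illustrative, not part of the asker's code), each typed subclass can keep a typed get() and also know how to parse the string that readSettings() pulls out of the file. Unrelated to generics, note that the read loop above compares strings with ==; it needs .equals(...) for the module/setting name matching to work, which would also explain values not updating after a read.

// Hypothetical sketch of a typed setting that can parse itself from the
// "Module:Setting:value" text format used above.
public abstract class Setting<T> {
    public final String name;
    public T value;

    protected Setting(String name, T value) {
        this.name = name;
        this.value = value;
    }

    public T get() {
        return value;          // typed access, no casting needed
    }

    // Each subclass knows how to turn the stored string back into T.
    public abstract void setFromString(String raw);
}

class BooleanSetting extends Setting<Boolean> {
    public BooleanSetting(String name, Boolean value) {
        super(name, value);
    }

    @Override
    public void setFromString(String raw) {
        value = Boolean.parseBoolean(raw);
    }
}

Reading then becomes setting.setFromString(astring[2]), and a mixed collection can be declared as List<Setting<?>> settings = new ArrayList<>(), which avoids the "cannot convert from boolean to BooleanSetting" mismatch as long as the list holds Setting objects rather than raw boolean values.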

Not able to process kafka json message with Flink siddhi library

I am trying to create a simple application where the app will consume a Kafka message, do some CQL transformation, and publish the result back to Kafka. Below is the code:
JAVA: 1.8
Flink: 1.13
Scala: 2.11
flink-siddhi: 2.11-0.2.2-SNAPSHOT
I am using library: https://github.com/haoch/flink-siddhi
input json to Kafka:
{
    "awsS3": {
        "ResourceType": "aws.S3",
        "Details": {
            "Name": "crossplane-test",
            "CreationDate": "2020-08-17T11:28:05+00:00"
        },
        "AccessBlock": {
            "PublicAccessBlockConfiguration": {
                "BlockPublicAcls": true,
                "IgnorePublicAcls": true,
                "BlockPublicPolicy": true,
                "RestrictPublicBuckets": true
            }
        },
        "Location": {
            "LocationConstraint": "us-west-2"
        }
    }
}
main class:
public class S3SidhiApp {
    public static void main(String[] args) {
        internalStreamSiddhiApp.start();
        //kafkaStreamApp.start();
    }
}
App class:
package flinksidhi.app;
import com.google.gson.JsonObject;
import flinksidhi.event.s3.source.S3EventSource;
import io.siddhi.core.SiddhiManager;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import org.apache.flink.streaming.siddhi.SiddhiCEP;
import org.json.JSONObject;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import static flinksidhi.app.connector.Consumers.createInputMessageConsumer;
import static flinksidhi.app.connector.Producer.*;
public class internalStreamSiddhiApp {

    private static final String inputTopic = "EVENT_STREAM_INPUT";
    private static final String outputTopic = "EVENT_STREAM_OUTPUT";
    private static final String consumerGroup = "EVENT_STREAM1";
    private static final String kafkaAddress = "localhost:9092";
    private static final String zkAddress = "localhost:2181";

    private static final String S3_CQL1 = "from inputStream select * insert into temp";
    private static final String S3_CQL = "from inputStream select json:toObject(awsS3) as obj insert into temp;" +
            "from temp select json:getString(obj,'$.awsS3.ResourceType') as affected_resource_type," +
            "json:getString(obj,'$.awsS3.Details.Name') as affected_resource_name," +
            "json:getString(obj,'$.awsS3.Encryption.ServerSideEncryptionConfiguration') as encryption," +
            "json:getString(obj,'$.awsS3.Encryption.ServerSideEncryptionConfiguration.Rules[0].ApplyServerSideEncryptionByDefault.SSEAlgorithm') as algorithm insert into temp2; " +
            "from temp2 select affected_resource_name,affected_resource_type, " +
            "ifThenElse(encryption == ' ','Fail','Pass') as state," +
            "ifThenElse(encryption != ' ' and algorithm == 'aws:kms','None','Critical') as severity insert into outputStream";

    public static void start() {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        //DataStream<String> inputS = env.addSource(new S3EventSource());

        //Flink kafka stream consumer
        FlinkKafkaConsumer<String> flinkKafkaConsumer =
                createInputMessageConsumer(inputTopic, kafkaAddress, zkAddress, consumerGroup);

        //Add Data stream source -- flink consumer
        DataStream<String> inputS = env.addSource(flinkKafkaConsumer);
        SiddhiCEP cep = SiddhiCEP.getSiddhiEnvironment(env);
        cep.registerExtension("json:toObject", io.siddhi.extension.execution.json.function.ToJSONObjectFunctionExtension.class);
        cep.registerExtension("json:getString", io.siddhi.extension.execution.json.function.GetStringJSONFunctionExtension.class);
        cep.registerStream("inputStream", inputS, "awsS3");
        inputS.print();
        System.out.println(cep.getDataStreamSchemas());

        //json needs extension jars to be present during runtime.
        DataStream<Map<String, Object>> output = cep
                .from("inputStream")
                .cql(S3_CQL1)
                .returnAsMap("temp");

        //Flink kafka stream Producer
        FlinkKafkaProducer<Map<String, Object>> flinkKafkaProducer =
                createMapProducer(env, outputTopic, kafkaAddress);

        //Add Data stream sink -- flink producer
        output.addSink(flinkKafkaProducer);
        output.print();

        try {
            env.execute();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Consumer class:
package flinksidhi.app.connector;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.json.JSONObject;
import java.util.Properties;
public class Consumers {
    public static FlinkKafkaConsumer<String> createInputMessageConsumer(String topic, String kafkaAddress, String zookeeprAddr, String kafkaGroup) {
        Properties properties = new Properties();
        properties.setProperty("bootstrap.servers", kafkaAddress);
        properties.setProperty("zookeeper.connect", zookeeprAddr);
        properties.setProperty("group.id", kafkaGroup);
        FlinkKafkaConsumer<String> consumer = new FlinkKafkaConsumer<String>(
                topic, new SimpleStringSchema(), properties);
        return consumer;
    }
}
Producer class:
package flinksidhi.app.connector;
import flinksidhi.app.util.ConvertJavaMapToJson;
import org.apache.flink.api.common.serialization.SerializationSchema;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import org.apache.flink.streaming.util.serialization.KeyedSerializationSchema;
import org.json.JSONObject;
import java.util.Map;
public class Producer {

    public static FlinkKafkaProducer<Tuple2> createStringProducer(StreamExecutionEnvironment env, String topic, String kafkaAddress) {
        return new FlinkKafkaProducer<Tuple2>(kafkaAddress, topic, new AverageSerializer());
    }

    public static FlinkKafkaProducer<Map<String, Object>> createMapProducer(StreamExecutionEnvironment env, String topic, String kafkaAddress) {
        return new FlinkKafkaProducer<Map<String, Object>>(kafkaAddress, topic, new SerializationSchema<Map<String, Object>>() {
            @Override
            public void open(InitializationContext context) throws Exception {
            }

            @Override
            public byte[] serialize(Map<String, Object> stringObjectMap) {
                String json = ConvertJavaMapToJson.convert(stringObjectMap);
                return json.getBytes();
            }
        });
    }
}
I have tried many things, but the code where the CQL is invoked is never called and doesn't even give any error, so I'm not sure where it is going wrong.
If I instead create an internal stream source and use the same input JSON returned as a string, it works.
Initial guess: if you are using event time, are you sure you have defined watermarks correctly? As stated in the docs:
(...) an incoming element is initially put in a buffer where elements are sorted in ascending order based on their timestamp, and when a watermark arrives, all the elements in this buffer with timestamps smaller than that of the watermark are processed (...)
If this doesn't help, I would suggest decomposing/simplifying the job to a bare minimum, for example just a source operator and some naive sink printing/logging elements. If that works, start adding back operators one by one. You could also start by simplifying your CEP pattern as much as possible.
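As an illustration of that kind of bisection, a minimal sketch (reusing the consumer helper from the question, nothing Siddhi-specific) could be:

// Kafka source straight to a print sink, no Siddhi/CEP in the job graph.
// If this prints records, the source side is fine; add operators back one
// at a time until the problem reappears.
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
FlinkKafkaConsumer<String> consumer =
        createInputMessageConsumer(inputTopic, kafkaAddress, zkAddress, consumerGroup);
env.addSource(consumer).print();
env.execute("kafka-source-smoke-test");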
First of all, thanks a lot @Piotr Nowojski. It was your small pointer about event time, which I had pondered over many times without it clicking, that made the difference. While debugging the two cases:
With the internal data source, where processing succeeded, I saw that a watermark was processed after the data was processed, but I didn't realize it was implicitly managing the event time of the data.
With Kafka as the data source, I could see very clearly that no watermark was being processed in the flow, but it didn't occur to me that this was happening because event time and watermarks were not being handled properly.
Adding a single line of code to the application fixed it; I understood what to add from the Flink Javadoc snippet below:
@deprecated In Flink 1.12 the default stream time characteristic has been changed to {@link
* TimeCharacteristic#EventTime}, thus you don't need to call this method for enabling
* event-time support anymore. Explicitly using processing-time windows and timers works in
* event-time mode. If you need to disable watermarks, please use {@link
* ExecutionConfig#setAutoWatermarkInterval(long)}. If you are using {@link
* TimeCharacteristic#IngestionTime}, please manually set an appropriate {@link
* WatermarkStrategy}. If you are using generic "time window" operations (for example {@link
* org.apache.flink.streaming.api.datastream.KeyedStream#timeWindow(org.apache.flink.streaming.api.windowing.time.Time)}
* that change behaviour based on the time characteristic, please use equivalent operations
* that explicitly specify processing time or event time.
*/
I learned that by default Flink uses event time, and for that watermarks need to be handled properly, which I hadn't done. So I added the line below to set the time characteristic of the Flink execution environment:
env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);
and kaboom... it started working. This method is deprecated and needs some other configuration, but thanks a lot, it was a great pointer; it helped me a lot and I solved the issue.
Thanks again @Piotr Nowojski
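For completeness, since setStreamTimeCharacteristic is deprecated, a sketch of the non-deprecated route (assuming Flink 1.13 and the FlinkKafkaConsumer from the question) is to attach an explicit WatermarkStrategy to the source when event time is actually wanted; the 5-second out-of-orderness bound here is only an illustration and relies on the Kafka record timestamps:

import java.time.Duration;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;

// Attach timestamps/watermarks to the Kafka source so that event-time operators
// (such as the Siddhi buffer that waits for watermarks) can make progress.
flinkKafkaConsumer.assignTimestampsAndWatermarks(
        WatermarkStrategy.<String>forBoundedOutOfOrderness(Duration.ofSeconds(5)));
DataStream<String> inputS = env.addSource(flinkKafkaConsumer);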

Google Cloud Dataflow issue with writing the data (TextIO or DatastoreIO)

OK, everyone. Another Dataflow question from a Dataflow newbie. (Just started playing with it this week..)
I'm creating a data pipeline to take in a list of product names and generate autocomplete data. The data processing part all seems to be working fine, but I'm missing something obvious, because when I add my last ".apply" to use either DatastoreIO or TextIO to write the data out, I get a syntax error in my IDE that says the following:
"The method apply(DatastoreV1.Write) is undefined for the type ParDo.SingleOutput<KV<String,List<String>>,Entity>"
It gives me an option to add a cast to the method receiver, but that obviously isn't the answer. Do I need to do some other step before I try to write the data out? My last step before trying to write the data is a call to an Entity helper for Dataflow to change my pipeline structure from KV<String, List<String>> to Entity, which seems to me like what I'd need to write to Datastore.
I got so frustrated with this thing the last few days that I even decided to write the data to some AVRO files instead, so I could just load it into Datastore by hand. Imagine how ticked I was when I got all that done and got the exact same error in the exact same place on my call to TextIO. That is why I think I must be missing something very obvious here.
Here is my code. I included it all for reference, but you probably just need to look at the main() at the bottom. Any input would be greatly appreciated! Thanks!
MrSimmonsSr
package com.client.autocomplete;
import com.client.autocomplete.AutocompleteOptions;
import com.google.datastore.v1.Entity;
import com.google.datastore.v1.Key;
import com.google.datastore.v1.Value;
import static com.google.datastore.v1.client.DatastoreHelper.makeKey;
import static com.google.datastore.v1.client.DatastoreHelper.makeValue;
import org.apache.beam.sdk.coders.DefaultCoder;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.PipelineResult;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.PCollectionList;
import com.google.api.services.bigquery.model.TableRow;
import com.google.common.base.MoreObjects;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.io.gcp.datastore.DatastoreIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.PTransform;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.transforms.SimpleFunction;
import org.apache.beam.sdk.transforms.GroupByKey;
import org.apache.beam.sdk.transforms.DoFn.ProcessContext;
import org.apache.beam.sdk.transforms.DoFn.ProcessElement;
import org.apache.beam.sdk.extensions.jackson.ParseJsons;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.options.Default;
import org.apache.beam.sdk.options.Description;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.options.StreamingOptions;
import org.apache.beam.sdk.options.Validation;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.List;
import java.util.ArrayList;
/*
* A simple Dataflow pipeline to create autocomplete data from a list of
* product names. It then loads that prefix data into Google Cloud Datastore for consumption by
* a Google Cloud Function. That function will take in a prefix and return a list of 10 product names
*
* Pseudo Code Steps
* 1. Load a list of product names from Cloud Storage
* 2. Generate prefixes for use with autocomplete, based on the product names
* 3. Merge the prefix data together with 10 products per prefix
* 4. Write that prefix data to the Cloud Datastore as a KV with a <String, List<String>> structure
*
*/
public class ClientAutocompletePipeline {
    private static final Logger LOG = LoggerFactory.getLogger(ClientAutocompletePipeline.class);

    /**
     * A DoFn that keys each product name by all of its prefixes.
     * This creates one row in the PCollection for each prefix<->product_name pair.
     */
    private static class AllPrefixes
            extends DoFn<String, KV<String, String>> {
        private final int minPrefix;
        private final int maxPrefix;

        public AllPrefixes(int minPrefix) {
            this(minPrefix, 10);
        }

        public AllPrefixes(int minPrefix, int maxPrefix) {
            this.minPrefix = minPrefix;
            this.maxPrefix = maxPrefix;
        }

        @ProcessElement
        public void processElement(ProcessContext c) {
            String productName = c.element().toString();
            for (int i = minPrefix; i <= Math.min(productName.length(), maxPrefix); i++) {
                c.output(KV.of(productName.substring(0, i), c.element()));
            }
        }
    }
    /**
     * Takes as input the top product names per prefix, and emits an entity
     * suitable for writing to Cloud Datastore.
     */
    static class FormatForDatastore extends DoFn<KV<String, List<String>>, Entity> {
        private String kind;
        private String ancestorKey;

        public FormatForDatastore(String kind, String ancestorKey) {
            this.kind = kind;
            this.ancestorKey = ancestorKey;
        }

        @ProcessElement
        public void processElement(ProcessContext c) {
            // Initialize an EntityBuilder and get it a valid key
            Entity.Builder entityBuilder = Entity.newBuilder();
            Key key = makeKey(kind, ancestorKey).build();
            entityBuilder.setKey(key);

            // New HashMap to hold all the properties of the Entity
            Map<String, Value> properties = new HashMap<>();
            String prefix = c.element().getKey();
            String productsString = "Products[";

            // iterate through the product names and add each one to the productsString
            for (String productName : c.element().getValue()) {
                // products.add(productName);
                productsString += productName + ", ";
            }
            productsString += "]";

            properties.put("prefix", makeValue(prefix).build());
            properties.put("products", makeValue(productsString).build());
            entityBuilder.putAllProperties(properties);
            c.output(entityBuilder.build());
        }
    }
    /**
     * Options supported by this class.
     *
     * <p>Inherits standard Beam example configuration options.
     */
    public interface Options
            extends AutocompleteOptions {
        @Description("Input text file")
        @Validation.Required
        String getInputFile();
        void setInputFile(String value);

        @Description("Cloud Datastore entity kind")
        @Default.String("prefix-product-map")
        String getKind();
        void setKind(String value);

        @Description("Whether output to Cloud Datastore")
        @Default.Boolean(true)
        Boolean getOutputToDatastore();
        void setOutputToDatastore(Boolean value);

        @Description("Cloud Datastore ancestor key")
        @Default.String("root")
        String getDatastoreAncestorKey();
        void setDatastoreAncestorKey(String value);

        @Description("Cloud Datastore output project ID, defaults to project ID")
        String getOutputProject();
        void setOutputProject(String value);
    }
    public static void main(String[] args) throws IOException {
        Options options = PipelineOptionsFactory.fromArgs(args).withValidation().as(Options.class);

        // create the pipeline
        Pipeline p = Pipeline.create(options);

        PCollection<String> toWrite = p
            // A step to read in the product names from a text file on GCS
            .apply(TextIO.read().from("gs://sample-product-data/clean_product_names.txt"))
            // Next expand the product names into KV pairs with prefix as key (<KV<String, String>>)
            .apply("Explode Prefixes", ParDo.of(new AllPrefixes(2)))
            // Apply a GroupByKey transform to the PCollection "flatCollection" to create "productsGroupedByPrefix".
            .apply(GroupByKey.<String, String>create())
            // Now format the PCollection for writing into the Google Datastore
            .apply("FormatForDatastore", ParDo.of(new FormatForDatastore(options.getKind(),
                    options.getDatastoreAncestorKey()))
            // Write the processed data to the Google Cloud Datastore
            // NOTE: This is the line that I'm getting the error on!!
            .apply(DatastoreIO.v1().write().withProjectId(MoreObjects.firstNonNull(
                    options.getOutputProject(), options.getOutputProject()))));

        // Run the pipeline.
        PipelineResult result = p.run();
    }
}
I think you need another closing parenthesis. I've removed some of the extraneous bits and re-indented according to the parentheses:
PCollection<String> toWrite = p
    .apply(TextIO.read().from("..."))
    .apply("Explode Prefixes", ...)
    .apply(GroupByKey.<String, String>create())
    .apply("FormatForDatastore", ParDo.of(new FormatForDatastore(
            options.getKind(), options.getDatastoreAncestorKey()))
        .apply(...);
Specifically, you need another parenthesis to close the apply("FormatForDatastore", ...). Right now, it is trying to call ParDo.of(...).apply(...) which doesn't work.
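Concretely, with the parenthesis moved, the tail of the question's main() could look like the sketch below. This only fixes the nesting: separately, GroupByKey produces KV<String, Iterable<String>>, so FormatForDatastore would need to be declared as DoFn<KV<String, Iterable<String>>, Entity> (or the grouped values converted to a List) for the chain to type-check; the firstNonNull(...) arguments are kept exactly as in the question.

// Sketch only: ParDo.of(new FormatForDatastore(...)) is closed before the
// Datastore write is applied, so the write attaches to the resulting
// PCollection<Entity> rather than to the ParDo transform itself.
p.apply(TextIO.read().from("gs://sample-product-data/clean_product_names.txt"))
 .apply("Explode Prefixes", ParDo.of(new AllPrefixes(2)))
 .apply(GroupByKey.<String, String>create())
 .apply("FormatForDatastore", ParDo.of(
         new FormatForDatastore(options.getKind(), options.getDatastoreAncestorKey())))
 .apply(DatastoreIO.v1().write().withProjectId(MoreObjects.firstNonNull(
         options.getOutputProject(), options.getOutputProject())));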

"Attributes and objects cannot be resolved" - error

The following code is for reading and writing files with Java, but Eclipse reports these errors:
buffer_1 cannot be resolved to a variable
file_reader cannot be resolved
(and other attributes as well)
What is wrong with this code?
//Class File_RW
package R_2;

import java.io.File;
import java.io.FileReader;
import java.io.FileNotFoundException;
import java.lang.NullPointerException;

public class File_RW {

    public File_RW() throws FileNotFoundException, NullPointerException {
        File file_to_read = new File("C:/myfiletoread.txt");
        FileReader file_reader = new FileReader(file_to_read);
        int nr_letters = (int) file_to_read.length() / Character.BYTES;
        char buffer_1[] = new char[nr_letters];
    }

    public void read() {
        file_reader.read(buffer_1, 0, nr_letters);
    }

    public void print() {
        System.out.println(buffer_1);
    }

    public void close() {
        file_reader.close();
    }

    public File get_file_to_read() {
        return file_to_read;
    }

    public int get_nr_letters() {
        return nr_letters;
    }

    public char[] get_buffer_1() {
        return buffer_1;
    }

    //...
}
//main method @ class Start:
package R_2;

import java.io.File;
import java.io.FileReader;
import java.io.FileNotFoundException;
import java.lang.NullPointerException;

public class Start {
    public static void main(String[] args) {
        File_RW file = null;
        try {
            file = new File_RW();
        } catch (NullPointerException e_1) {
            System.out.println("File not found.");
        }
        //...
    }
}
I can't find any mistake. I have also tried to include a try-catch statement in the constructor of the class "File_RW", but the error messages were the same.
Yes, there are errors in your code, and they are of a really basic nature: you are declaring local variables instead of fields.
Meaning: you have them in the constructor, but they need to go one level up. When you declare an entity within a constructor or method, it is a variable that only exists within that constructor/method.
If you want multiple methods to be able to use that entity, it needs to be a field, declared in the scope of the enclosing class, like:
class FileRW {
    private File fileToRead = new File...
    ...
and then you can use your fields within all your methods. Please note: you can do the actual setup within your constructor:
class FileRW {
    private File fileToRead;

    public FileRW() {
        fileToRead = ..
but you don't have to.
Finally: please read about Java naming conventions. Avoid using "_" within names (except in SOME_CONSTANT-style constant names).
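Put together, a sketch of the class with the declarations promoted to fields could look like this (behaviour kept deliberately close to the original; the IOExceptions from read() and close() still have to be declared or handled, and field names follow Java conventions):

package R_2;

import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;

public class FileRW {
    // Fields: visible to every method of the class.
    private final File fileToRead;
    private final FileReader fileReader;
    private final int nrLetters;
    private final char[] buffer;

    public FileRW() throws FileNotFoundException {
        fileToRead = new File("C:/myfiletoread.txt");
        fileReader = new FileReader(fileToRead);
        nrLetters = (int) (fileToRead.length() / Character.BYTES);
        buffer = new char[nrLetters];
    }

    public void read() throws IOException {
        fileReader.read(buffer, 0, nrLetters);
    }

    public void print() {
        System.out.println(buffer);
    }

    public void close() throws IOException {
        fileReader.close();
    }
}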
The Java code is already running... thanks. The same program, edited in C++ with Visual Studio Express, is in this Stack Overflow entry:
c++ file read write-error: Microsoft Visual C++ Runtime libr..debug Assertion failed, expr. stream.valid()

Read my own printed line in console

Is there a way in which, from one function, I can print a String with System.out.print() and then read it from another function?
Something like:
void printC(String foo) {
    System.out.print(foo);
}

void read() {
    String c;
    while (something) {
        printC(somethingElse);
        c = System.console.readLine();
        JOptionPane.showMessageDialog(c);
    }
}
No, you can't. As other people have commented, you probably just want an internal data structure to connect different components.
In command-line programs, the standard input and standard output (plus standard error) are completely independent streams. It's typical for all three to be connected to a single virtual terminal, but they can be redirected independently from the shell, such as by using a pipeline or files.
Think about what if the input of your program is coming from a file and the output is being piped to another program; trying to "get back" the output doesn't make any sense.
Try PipedOutputStream.
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.io.PrintStream;
import java.util.Scanner;

import javax.swing.JFrame;
import javax.swing.JOptionPane;

public class Test extends JFrame {

    void printC(String foo) {
        System.out.print(foo);
    }

    void read() throws IOException {
        String c = "";
        PipedOutputStream pipeOut = new PipedOutputStream();
        PipedInputStream pipeIn = new PipedInputStream(pipeOut);
        System.setOut(new PrintStream(pipeOut));
        Scanner sc = new Scanner(pipeIn);
        while (!c.equalsIgnoreCase("Quit")) {
            printC("Test\n");
            c = sc.nextLine();
            JOptionPane.showMessageDialog(this, c);
        }
    }

    public static void main(String[] args) throws IOException {
        Test t = new Test();
        t.read();
    }
}
Why do you want to do this? Is it so that you can print something to the screen, or so that you can create events?
If you particularly want to pass messages to the screen AND also to another part of your application, a simple solution could involve creating your own PrintStream class. You can deal with the object in the same way as you would otherwise deal with System.out (as that's a PrintStream too).
Something along the lines of this:
public class FancyStream extends PrintStream {

    LinkedList<String> messageQueue = new LinkedList<>();

    @Override
    public void println(String line) {
        System.out.println(line);
        messageQueue.add(line);
    }

    public String getLine() {
        return messageQueue.pop();
    }
}
However, if you want events (as you've suggested in the comments), this is not the way to do it!
You should take a look at the Observer pattern for dealing with events; the Wikipedia article about it is here.
There are plenty of other resources for learning about the Observer pattern if you do a Google search. Java even has a built-in Observable class and Observer interface that may solve your problem.
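A minimal sketch of that listener idea (the names LineListener and ConsolePrinter are made up for illustration; note that java.util.Observable itself has been deprecated since Java 9):

import java.util.ArrayList;
import java.util.List;

// Instead of reading the console output back, the printer notifies
// every registered listener directly.
interface LineListener {
    void onLine(String line);
}

class ConsolePrinter {
    private final List<LineListener> listeners = new ArrayList<>();

    void addListener(LineListener listener) {
        listeners.add(listener);
    }

    void printC(String foo) {
        System.out.print(foo);              // still goes to the screen
        for (LineListener listener : listeners) {
            listener.onLine(foo);           // and to every listener
        }
    }
}

// Usage: printer.addListener(line -> JOptionPane.showMessageDialog(null, line));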
