How to parse logs written by multiple threads? - java

I have an interesting problem and would appreciate your thoughts for the best solution.
I need to parse a set of logs. The logs are produced by a multi-threaded program and a single process cycle produces several lines of logs.
When parsing these logs I need to pull out specific pieces of information from each process; naturally, this information spans multiple lines (I want to compress these pieces of data into a single line). Because the application is multi-threaded, the block of lines belonging to one process can be fragmented, as other processes write to the same log file at the same time.
Fortunately, each line gives a process ID so I'm able to distinguish what logs belong to what process.
Now, there are already several parsers, which all extend the same class but were designed to read logs from a single-threaded application (no fragmentation, from the original system) and use a readLine() method in the superclass. These parsers keep reading lines until all regular expressions have been matched for a block of lines (i.e. the lines written in a single process cycle).
So, what can I do with the super class so that it can manage the fragmented logs, and ensure change to the existing implemented parsers is minimal?

It sounds like there are some existing parser classes already in use that you wish to leverage. In this scenario, I would write a decorator for the parser which strips out lines not associated with the process you are monitoring.
It sounds like your classes might look like this:
abstract class Parser {
    public abstract void parse( ... );
    protected String readLine() { ... }
}

class SpecialPurposeParser extends Parser {
    public void parse( ... ) {
        // ... special stuff
        readLine();
        // ... more stuff
    }
}
And I would write something like:
class SingleProcessReadingDecorator extends Parser {
    private Parser parser;
    private String processId;

    public SingleProcessReadingDecorator( Parser parser, String processId ) {
        this.parser = parser;
        this.processId = processId;
    }

    public void parse( ... ) { parser.parse( ... ); }

    protected String readLine() {
        String text = super.readLine();
        if( text == null || /* text is for processId */ ) {
            // end of input, or a line belonging to the monitored process
            return text;
        }
        else {
            // keep readLine'ing until you find the next matching line, then return it
            return this.readLine();
        }
    }
}
Then any call site you want to modify changes like this:
//old way
Parser parser = new SpecialPurposeParser();
//changes to
Parser parser = new SingleProcessReadingDecorator( new SpecialPurposeParser(), "process1234" );
This code snippet is simple and incomplete, but gives you the idea of how the decorator pattern could work here.

I would write a simple distributor that reads the log file line by line and stores them in different VirtualLog objects in memory -- a VirtualLog being a kind of virtual file, actually just a String or something that the existing parsers can be applied to. The VirtualLogs are stored in a Map with the process ID (PID) as the key. When you read a line from the log, check if the PID is already there. If so, add the line to the PID's respective VirtualLog. If not, create a new VirtualLog object and add it to the Map. Parsers run as separate Threads, one on every VirtualLog. Every VirtualLog object is destroyed as soon as it has been completely parsed.
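A minimal sketch of that distributor, treating each VirtualLog as a plain line buffer (the getProcessId helper is an assumption; adapt it to your log format):
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

class LogDistributor {
    private final Map<String, StringBuilder> virtualLogs = new HashMap<>();

    void distribute(String logFile) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(logFile))) {
            String line;
            while ((line = reader.readLine()) != null) {
                String pid = getProcessId(line);
                virtualLogs.computeIfAbsent(pid, k -> new StringBuilder())
                           .append(line).append('\n');
            }
        }
        // hand each completed VirtualLog to an existing parser, e.g. one thread per entry
    }

    private String getProcessId(String line) {
        return line.split("\\s+", 2)[0]; // assumption: the PID is the first token on the line
    }
}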

You need to store lines temporarily in a queue, where a single thread consumes them and passes them on once each set has been completed. If you have no way of knowing whether a set is complete, either by the number of lines or by their content, you could consider using a sliding-window technique where you don't release an individual set until a certain amount of time has passed.
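A minimal sketch of such a sliding window, assuming a fixed flush timeout is acceptable and that a single consuming thread calls flushCompleted periodically (all names here are illustrative):
import java.util.ArrayList;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiConsumer;

class SlidingWindowCollector {
    private static final long FLUSH_AFTER_MS = 5_000; // assumption: 5s of silence means a set is complete
    private final Map<String, List<String>> pending = new LinkedHashMap<>();
    private final Map<String, Long> lastSeen = new LinkedHashMap<>();

    void offer(String pid, String line) {
        pending.computeIfAbsent(pid, k -> new ArrayList<>()).add(line);
        lastSeen.put(pid, System.currentTimeMillis());
    }

    // called periodically by the consuming thread; emits sets that have gone quiet
    void flushCompleted(BiConsumer<String, List<String>> sink) {
        long now = System.currentTimeMillis();
        Iterator<Map.Entry<String, Long>> it = lastSeen.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, Long> e = it.next();
            if (now - e.getValue() >= FLUSH_AFTER_MS) {
                sink.accept(e.getKey(), pending.remove(e.getKey()));
                it.remove();
            }
        }
    }
}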

One simple solution could be to read the file line by line and write several files, one for each process ID. The list of process IDs can be kept in a hash map in memory to determine whether a new file is needed, or which already-created file the lines for a given process ID should go to. Once all the (temporary) files are written, the existing parsers can do the job on each one.
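A rough sketch of that temporary-file split (again assuming the PID is the first token on each line):
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.HashMap;
import java.util.Map;

class LogSplitter {
    static void splitByProcessId(String logFile) throws IOException {
        Map<String, PrintWriter> writers = new HashMap<>();
        try (BufferedReader reader = new BufferedReader(new FileReader(logFile))) {
            String line;
            while ((line = reader.readLine()) != null) {
                String pid = line.split("\\s+", 2)[0]; // assumption: PID is the first token
                PrintWriter w = writers.get(pid);
                if (w == null) {
                    w = new PrintWriter("log-" + pid + ".tmp");
                    writers.put(pid, w);
                }
                w.println(line);
            }
        } finally {
            for (PrintWriter w : writers.values())
                w.close();
        }
        // now run the existing single-threaded parsers over each temporary file
    }
}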

Would something like this do it? It runs a new Thread for each Process ID in the log file.
class Parser {
    private String currentLine;

    Parser() {
        // construct parser
    }

    // readLineFromLog() and getProcessIdFromLine() are assumed helpers
    synchronized String readLine(String processId) throws InterruptedException {
        if (currentLine == null)
            currentLine = readLineFromLog();
        while (currentLine != null && !getProcessIdFromLine(currentLine).equals(processId))
            wait();
        String line = currentLine;
        currentLine = readLineFromLog();
        notifyAll(); // wake all waiting parsers so the right one can claim the next line
        return line;
    }
}

class ProcessParser extends Parser implements Runnable {
    private String processId;

    ProcessParser(String processId) {
        super();
        this.processId = processId;
    }

    void startParser() {
        new Thread(this).start();
    }

    public void run() {
        try {
            String line;
            while ((line = readLine()) != null) {
                // process log line here
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    String readLine() throws InterruptedException {
        return super.readLine(processId);
    }
}
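For example, assuming you can enumerate the process IDs up front (the knownProcessIds collection here is hypothetical), you would start one parser per ID:
for (String pid : knownProcessIds) {
    new ProcessParser(pid).startParser();
}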

Related

Can ChronicleQueue tailers for two different queues be interleaved?

I have two separate ChronicleQueues that were created by independent threads that monitor web socket streams in a Java application. When I read each queue independently in a separate single-thread program, I can traverse each entire queue as expected - using the following minimal code:
final ExcerptTailer queue1Tailer = queue1.createTailer();
final ExcerptTailer queue2Tailer = queue2.createTailer();
while (true)
{
    try (final DocumentContext context = queue1Tailer.readingDocument())
    {
        if (isNull(context.wire()))
            break;
        counter1++;
        queue1Data = context.wire()
                            .bytes()
                            .readObject(Queue1Data.class);
        queue1Writer.write(String.format("%d\t%d\t%d%n", counter1, queue1Data.getEventTime(), queue1Data.getEventContent()));
    }
}
while (true)
{
    try (final DocumentContext context = queue2Tailer.readingDocument())
    {
        if (isNull(context.wire()))
            break;
        counter2++;
        queue2Data = context.wire()
                            .bytes()
                            .readObject(Queue2Data.class);
        queue2Writer.write(String.format("%d\t%d\t%d%n", counter2, queue2Data.getEventTime(), queue2Data.getEventContent()));
    }
}
In the above, I am able to read all the Queue1Data objects, then all the Queue2Data objects, and access values as expected. However, when I try to interleave reading the queues (read an object from one queue, then, based on a property of the Queue1Data object (a time stamp), read Queue2Data objects until the first object after that time stamp (the limit variable below) is found, then do something with it), after only one object from the queue2Tailer is read, a DecoratedBufferUnderflowException is thrown: readCheckOffset0 failed. The simplified code that fails is below (I have tried putting the outer while(true) loop both inside and outside the queue2Tailer try block):
final ExcerptTailer queue1Tailer = queue1Queue.createTailer("label1");
try (final DocumentContext queue1Context = queue1Tailer.readingDocument())
{
    final ExcerptTailer queue2Tailer = queue2Queue.createTailer("label2");
    while (true)
    {
        try (final DocumentContext queue2Context = queue2Tailer.readingDocument())
        {
            if (isNull(queue2Context.wire()))
            {
                terminate = true;
                break;
            }
            queue2Data = queue2Context.wire()
                                      .bytes()
                                      .readObject(Queue2Data.class);
            while (true)
            {
                queue1Data = queue1Context.wire()
                                          .bytes()
                                          .readObject(Queue1Data.class); // first read succeeds
                if (queue1Data.getFieldValue() > limit) // if this fails the inner loop continues
                {                                       // but the second read fails
                    // cache a value
                    break;
                }
            }
            // continue working with queue2Data object and cached values
        } // end try block for queue2 tailer
    } // end outer while loop
} // end outer try block for queue1 tailer
I have tried as above, and also with both tailers created at the beginning of the function which does the processing (a private function executed when a button is clicked in a relatively simple Java application). Basically, I took the loop which worked independently and put it inside another loop in the function, expecting no problems. I think I am missing something crucial in how tailers are positioned and used to read objects, but I cannot figure out what it is, since the same basic code works when reading the queues independently. The use of isNull(context.wire()) to determine when there are no more objects in a queue I got from one of the examples, though I am not sure it is the proper way to detect the end of a queue when processing it sequentially.
Any suggestions would be appreciated.
You're not writing it correctly in the first instance.
Now, there's a hardcore way of achieving what you are trying to achieve (that is, doing everything explicitly, at a lower level), and there's the MethodReader/MethodWriter magic provided by Chronicle.
Hardcore way
Writing
// write first event type
try (DocumentContext dc = queueAppender.writingDocument()) {
    dc.wire().writeEventName("first").text("Hello first");
}

// write second event type
try (DocumentContext dc = queueAppender.writingDocument()) {
    dc.wire().writeEventName("second").text("Hello second");
}
This will write different types of messages into the same queue, and you will be able to easily distinguish those when reading.
Reading
StringBuilder reusable = new StringBuilder();
while (true) {
    try (DocumentContext dc = tailer.readingDocument()) {
        if (!dc.isPresent()) {
            continue;
        }
        dc.wire().readEventName(reusable);
        if ("first".contentEquals(reusable)) {
            // handle first
        } else if ("second".contentEquals(reusable)) {
            // handle second
        }
        // optionally handle other events
    }
}
The Chronicle Way (aka Peter's magic)
This works with any marshallable types, as well as any primitive types and CharSequence subclasses (i.e. Strings), and Bytes. For more details have a read of MethodReader/MethodWriter documentation.
Suppose you have some data classes:
public class FirstDataType implements Marshallable { // alternatively - extends SelfDescribingMarshallable
    // data fields...
}

public class SecondDataType implements Marshallable { // alternatively - extends SelfDescribingMarshallable
    // data fields...
}
Then, to write those data classes to the queue, you just need to define the interface, like this:
interface EventHandler {
    void first(FirstDataType first);
    void second(SecondDataType second);
}
Writing
Then, writing data is as simple as:
final EventHandler writer = appender.methodWriterBuilder(EventHandler.class).get();
// assuming firstDatum and secondDatum are created earlier
writer.first(firstDatum);
writer.second(secondDatum);
What this does is the same as in the hardcore section: it writes the event name (taken from the method name in the method writer, i.e. "first" or "second" correspondingly), followed by the actual data object.
Reading
Now, to read those events from the queue, you need to provide an implementation of the above interface, that will handle corresponding event types, e.g.:
// you implement this to read data from the queue
private class MyEventHandler implements EventHandler {
    public void first(FirstDataType first) {
        // handle first type of events
    }
    public void second(SecondDataType second) {
        // handle second type of events
    }
}
And then you read as follows:
EventHandler handler = new MyEventHandler();
MethodReader reader = tailer.methodReader(handler);
while (true) {
    reader.readOne(); // readOne returns a boolean which can be used to determine if there's no more data, and pause if appropriate
}
Misc
You don't have to use the same interface for reading and writing. In case you want to only read events of second type, you can define another interface:
interface OnlySecond {
    void second(SecondDataType second);
}
Now, if you create a handler implementing this interface and give it to tailer#methodReader() call, the readOne() calls will only process events of second type while skipping all others.
This also works for MethodWriters, i.e. if you have several processes writing different types of data and one process consuming all that data, it is not uncommon to define multiple interfaces for writing and then a single interface extending all the others for reading, e.g.:
interface FirstOut {
    void first(String first);
}

interface SecondOut {
    void second(long second);
}

interface ThirdOut {
    void third(ThirdDataType third);
}

interface AllIn extends FirstOut, SecondOut, ThirdOut {
}
(I deliberately used different data types for method parameters to show how it is possible to use various types)
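A hedged sketch of how that might be wired together (assuming queue is an already-built ChronicleQueue; the acquireAppender/createTailer calls follow the Chronicle Queue API, but treat the exact wiring as an assumption):
// each producer writes through its own narrow interface
FirstOut firstWriter = queue.acquireAppender().methodWriter(FirstOut.class);
firstWriter.first("Hello first");

// the single consumer reads everything through the combined interface
MethodReader reader = queue.createTailer().methodReader(new AllIn() {
    public void first(String first) { /* handle first */ }
    public void second(long second) { /* handle second */ }
    public void third(ThirdDataType third) { /* handle third */ }
});
while (reader.readOne()) {
    // keep draining while messages are available; pause when readOne() returns false
}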
With further testing, I have found that nested loops reading multiple queues containing data in different POJO classes are possible. The problem with the code in the question above is that queue1Context is obtained once, OUTSIDE the loop that I expected to read Queue1Data objects. My fundamental misconception was that DocumentContext objects manage stepping through the objects in a queue, whereas it is actually ExcerptTailer objects that manage stepping (maintaining indices) when reading a queue sequentially.
In case it might help someone else just getting started with ChronicleQueues, the inner loop in the original question should be:
while (true)
{
    try (final DocumentContext queue1Context = queue1Tailer.readingDocument())
    {
        queue1Data = queue1Context.wire()
                                  .bytes()
                                  .readObject(Queue1Data.class); // first read succeeds
        if (queue1Data.getFieldValue() > limit) // if this fails the inner loop continues as expected
        {                                       // and second and subsequent reads now succeed
            // cache a value
            break;
        }
    }
}
And of course the outer-most try block containing queue1Context (in the original code) should be removed.

Unable to read one file and generate one output CSV at a time in Java

I am reading files from one folder; each file can have a varying number of lines of records. So I am creating two lists here:
List<Address> addressList;
List<List<Address>> addressLists;
I am calling the readIncomingFiles method, which returns a List<List<Address>>:
// The code structure of the method which reads incoming files
public List<List<Address>> readIncomingFiles() {
    // some lines of code processing data
    if (!(addressList == null || addressList.size() == 0)) {
        addressLists.add(addressList);
    }
    return addressLists;
}
Now addressLists holds the records from all files. In the main flow I have a process method, which first reads my addressLists object containing all the records. Suppose there are three files with three records each; it will then hold 9 records in total.
void process() { // main method
    this.addressLists = this.readIncomingFiles();
    // getOutgoingFileName builds the list of all outgoing files to be generated in the destination folder
    List<String> outgoingFileNames = this.getOutgoingFileName();
    for (String outgoingFile : outgoingFileNames) {
        // validate that the generated output file name contains "csv", then call processFile
        processFile();
    }
}

void processFile() {
    for (List<AddressDto> listOfAddress : this.addressLists) {
        for (AddressDto address : listOfAddress) {
            this.csvOut = new OutputCsvDataDto();
            // process files and records
            // OutputCsvDataDto receives data from the result-generating method,
            // which writes records into an OutputCsvDataDto list
        }
    }
}
The problem is that it reads all files and all records, since readIncomingFiles returns List<List<Address>>. Also, the getOutgoingFileName method generates three output file names at a time and returns them as a list. The code structure of that method is pasted below:
public List<String> getOutgoingFileName() {
    for (File incomingFile : incomingFileFolder.listFiles()) {
        outgoingFilenames.add("results_" + incomingFile.getName());
    }
    return outgoingFilenames;
}
How can I read one record at a time? If I read one record at a time, how will I process other records? I am new in Java.
I'm not sure if I got you correctly, but let's sum it up:
there's one folder with many files,
these files contain a variable number of lines,
each line is a record of some kind (presumably a postal address),
you want to read all these files, process them one by one, line by line, and save the output to a file whose name is somehow based on the input file.
If all of the above is correct, then first let's start with a modification of the whole process, as currently you read all the files into memory, and this is something you don't really want to do. What if a single file takes up, say, 4 GiB? The better approach is to process the whole thing in a manner more resembling a "pipeline" than a "storage".
So, it goes more or less like this:
public static void main(String[] args) {
    File incomingDir = new File(args[0]);
    File outgoingDir = new File(args[1]);
    for (File f : incomingDir.listFiles()) {
        processFile(f, outgoingDir);
    }
}

private static void processFile(File incomingFile, File outgoingDir) {
    File outgoingFile = new File(outgoingDir, "results-" + incomingFile.getName());
    for (String line : /* read lines from incomingFile */) {
        Address address = parseAddress(line);
        /* write address to outgoingFile */
    }
}

private static Address parseAddress(String line) {
    Address address;
    /* do parsing */
    return address;
}
Of course you need to adapt the code, probably using a BufferedReader in a while loop (instead of the for loop in processFile above), but this is more to sketch out the concept than to give a copy-paste answer; a sketch of that adaptation follows. Also think about how you could make this code work in parallel, and whether that always makes sense.
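For instance, processFile adapted to a BufferedReader loop might look like this (formatAddress is an illustrative helper that renders one record as an output row):
private static void processFile(File incomingFile, File outgoingDir) throws IOException {
    File outgoingFile = new File(outgoingDir, "results-" + incomingFile.getName());
    try (BufferedReader reader = new BufferedReader(new FileReader(incomingFile));
         PrintWriter writer = new PrintWriter(new FileWriter(outgoingFile))) {
        String line;
        while ((line = reader.readLine()) != null) {
            Address address = parseAddress(line);
            writer.println(formatAddress(address)); // illustrative: format one record as a CSV row
        }
    }
}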

How can you turn a String representation of a class into a bona fide Object?

I'm writing a program that needs to have Command Objects. A Command contains a String for its name, and an AbstractAction that represents what the Command actually does. Furthermore, a Command has a method, init(), used higher up in the program's hierarchy that instantiates variables for the Command's use (to provide access to the GUI, network, and so on), and a method, execute(), that executes the AbstractAction on a special Thread. Here is an example of creating and using a Command:
Command c = new Command("Test",
new AbstractAction() {
public void actionPerformed(ActionEvent a) {
System.out.println("Hello world!");
}
});
At this point, calling "c.execute();" will print out "Hello world!", as expected.
My goal is to have a text file with pairs of values, which can be parsed to generate a String name and an AbstractAction action. Once that has been done, another class will go through the found names and actions, create Command Objects for each one, and add them to the list of commands in the program, where they can then be used as normal.
Right now, my problem is that I read in a String that represents the body of the private AbstractAction above, but there isn't an easy way to actually convert that String into an actual AbstractAction object.
One potential idea was creating a temporary java file with the AbstractAction String representation, compiling it, creating a new AbstractAction from it, and then get that reference using reflection, but that seems like overkill. Another was to directly modify the source of the file that parses through the file, so that it would have the code of the AbstractAction written out, but again, this is a bit crazy.
I've tried a few other implementations, including forcing the user to create a subclass of Command, putting their source into a special program folder, and then creating the Commands on initialisation, but this ended up being a lot of work for the user (lots of redundant code).
Please let me know if there's a better way to implement what I want to do- or if there's an easier way to turn the String of the source into an inner Object as above.
Edit 1:
Here is an example of what the text file would look like:
//Anything outside of quotes is a comment
"Foo", "System.out.println("Hello world!");"
"Bar", "network.sendOverAFile(new File("test.txt"));"
From here, the parser (on startup) would read through the file and extract "Foo" as a String name, and "System ... ;" as a String action. I need to turn action into the code in the body of the AbstractAction, as seen above when creating the Command.
The same would be done for Bar; Bar uses one of the variables passed by init().
As for the subclass implementation I tried, the user would have to create their own subclass of Command, and put it into a source folder. A subclass would look something like this:
public class TestCommand extends Command {
    public TestCommand() {
        super("Test", new AbstractAction() {
            public void actionPerformed(ActionEvent a) {
                System.out.println("Hello!");
            }
        });
    }
}
This would then be put into a source directory, among every other subclassed Command, and compiled. The parser would go through the compiled code segments, and add the relevant information to an array. Every time a Command would normally be executed, the parser scans through the list of all names, and if there is a match, execute the relevant AbstractAction. This works, but involves a ton of references to external classes (which will probably slow down the program with dozens of commands), and is two or three times as much work for the users making the plugin. As a result, I felt it would be much easier to use the text file technique above, but I don't know how to turn a String representation of the code into the code itself; Ergo my initial question.
This sounds like a case of overengineering. Do you really need this much flexibility at runtime, or do you simply have a lot of commands and you want an easy way to refer to them in a file?
If it's the latter, your text file doesn't need to contain the code; it just needs to contain symbolic identifiers corresponding to that code. Those identifiers should exist in your code as enum constants:
public enum Command { FOO, BAR }
You should create all of your actions in code, and place those actions in a Map using the enum constants as keys. Your file can then refer to the actions by those enum constants:
public List<Action> parseActions(Path file)
        throws IOException {
    List<Action> actions = new ArrayList<>();
    try (BufferedReader reader =
            Files.newBufferedReader(file, Charset.defaultCharset())) {
        String line;
        while ((line = reader.readLine()) != null) {
            Command command = Command.valueOf(line);
            Action action = getAction(command);
            actions.add(action);
        }
    }
    return actions;
}
private Map<Command, Action> allActions;

private Action getAction(Command command) {
    Objects.requireNonNull(command, "Command cannot be null");
    if (allActions == null) {
        allActions = new EnumMap<>(Command.class);
        allActions.put(Command.FOO, new AbstractAction() {
            public void actionPerformed(ActionEvent event) {
                System.out.println("Hello world!");
            }
        });
        allActions.put(Command.BAR, new AbstractAction() {
            public void actionPerformed(ActionEvent event) {
                network.sendOverAFile(new File("test.txt"));
            }
        });
        // Safety check
        if (!allActions.keySet().containsAll(
                EnumSet.allOf(Command.class))) {
            throw new RuntimeException(
                "Not every Command constant has an associated Action");
        }
    }
    return allActions.get(command);
}
To conform to the above, your text file would simply contain:
FOO
BAR
If you really and truly need fully dynamic code that can be read from a text file, bear in mind that it is a tremendous security hole. In fact, it is the very definition of code injection: anyone can place arbitrary code (including things like Runtime.getRuntime().exec("rd /s/q C:\\Windows\\System32") or Runtime.getRuntime().exec("rm -rf ~")) in a file and your program will gladly run it.
If you're still sure that you want to do it, you'd probably want to use the JavaScript engine that comes with every Java runtime:
public List<Action> parseActions(Path file)
        throws IOException {
    List<Action> actions = new ArrayList<>();
    final ScriptEngine engine =
        new ScriptEngineManager().getEngineByName("JavaScript");
    Bindings bindings = engine.getBindings(ScriptContext.ENGINE_SCOPE);
    bindings.put("network", myNetwork);
    try (BufferedReader reader =
            Files.newBufferedReader(file, Charset.defaultCharset())) {
        String line;
        while ((line = reader.readLine()) != null) {
            String[] nameAndCode = line.split("\\s+", 2);
            String name = nameAndCode[0];
            final String code = nameAndCode[1];
            Action action = new AbstractAction(name) {
                public void actionPerformed(ActionEvent event) {
                    try {
                        engine.eval(code);
                    } catch (ScriptException e) {
                        // eval declares a checked exception; surface it as unchecked here
                        throw new RuntimeException(e);
                    }
                }
            };
            actions.add(action);
        }
    }
    return actions;
}
Each line in your file would contain a command name followed by JavaScript code. So it might look like this:
Foo importClass(java.lang.System); System.out.println('Hello world!');
Bar importClass(java.io.File); network.sendOverAFile(new File('test.txt'));
Another major disadvantage of doing this, in my opinion, is that the code won't benefit from compiler checks, and you certainly can't set breakpoints in that code from a debugger. All in all, it will be a considerable headache to debug and maintain.

Getting an InputStream to read more than once, regardless of markSupported()

I need to be able to re-use a java.io.InputStream multiple times, and I figured the following code would work, but it only works the first time.
Code
public class Clazz
{
    private java.io.InputStream dbInputStream, firstDBInputStream;
    private ArrayTable db;

    public Clazz(java.io.InputStream defDB) throws java.io.IOException
    {
        this.firstDBInputStream = defDB;
        this.dbInputStream = defDB;
        if (defDB == null)
            throw new java.io.FileNotFoundException("Could not find the database");
        if (dbInputStream.markSupported())
            dbInputStream.mark(Integer.MAX_VALUE);
        loadDatabaseToArrayTable();
    }

    public final void loadDatabaseToArrayTable() throws java.io.IOException
    {
        this.dbInputStream = firstDBInputStream;
        if (dbInputStream.markSupported())
            dbInputStream.reset();
        java.util.Scanner fileScanner = new java.util.Scanner(dbInputStream);
        String CSV = "";
        for (int i = 0; fileScanner.hasNextLine(); i++)
            CSV += fileScanner.nextLine() + "\n";
        db = ArrayTable.createArrayTableFromCSV(CSV);
    }

    public void reloadDatabase() // a method called by the UI
    {
        try
        {
            loadDatabaseToArrayTable();
        }
        catch (Throwable t)
        {
            // alert the user that an error has occurred
        }
    }
}
Note that ArrayTable is a class of mine, which uses arrays to give an interface for working with tables.
Question
In this program, the database is shown directly to the user immediately after the reloadDatabase() method is called, and so any solution involving saving the initial read to an object in memory is useless, as that will NOT refresh the data (think of it like a browser; when you press "Refresh", you want it to fetch the information again, not just display the information it fetched the first time). How can I read a java.io.InputStream more than once?
You can't necessarily read an InputStream more than once. Some implementations support it, some don't. What you are doing is checking the markSupported method, which is indeed an indicator if you can read the same stream twice, but then you are ignoring the result. You have to call that method to see if you can read the stream twice, and if you can't, make other arrangements.
Edit (in response to comment): When I wrote my answer, my "other arrangements" meant getting a fresh InputStream. However, having read the comments on your question about what you want to do, I'm not sure it is possible. For the basics of the operation, you probably want RandomAccessFile (at least that would be my first guess, and if it worked, it would be the easiest), but you will have file-access issues: with one application actively writing to a file and another reading it, you will have problems, and exactly which problems will depend on the OS, so any solution would require more testing. I suggest a separate question on SO that hits on that point, and someone who has tried it can perhaps give you more insight.
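For the simpler case where the source can just be re-opened (a file path, a classpath resource, a URL), one way to make those other arrangements is to hold a stream factory rather than a single stream; a minimal sketch, with Callable standing in for a factory that may throw:
import java.io.InputStream;
import java.util.concurrent.Callable;

class ReloadableSource {
    private final Callable<InputStream> streamFactory; // e.g. () -> new FileInputStream(path)

    ReloadableSource(Callable<InputStream> streamFactory) {
        this.streamFactory = streamFactory;
    }

    // every reload opens a brand-new stream instead of re-reading an exhausted one
    InputStream openFresh() throws Exception {
        return streamFactory.call();
    }
}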
You never mark the stream to be reset:
public Clazz(java.io.InputStream defDB)
{
    firstDBInputStream = defDB.markSupported() ? defDB : new BufferedInputStream(defDB);
    // BufferedInputStream supports marking
    firstDBInputStream.mark(500000); // avoid IOException on first reset
}

public final void loadDatabaseToArrayTable() throws java.io.IOException
{
    this.dbInputStream = firstDBInputStream;
    dbInputStream.reset();
    dbInputStream.mark(500000); // or however long the data is
    java.util.Scanner fileScanner = new java.util.Scanner(dbInputStream);
    StringBuilder CSV = new StringBuilder(); // StringBuilder is more efficient in a loop
    while (fileScanner.hasNextLine())
        CSV.append(fileScanner.nextLine()).append("\n");
    db = ArrayTable.createArrayTableFromCSV(CSV.toString());
}
However, you could instead keep a copy of the original ArrayTable and copy that when you need to (or even the created string, to rebuild it). This code creates the string and caches it, so you can safely discard the input streams and just use readCSV to build the ArrayTable:
private String readCSV = null;

public final void loadDatabaseToArrayTable() throws java.io.IOException
{
    if (readCSV == null) {
        this.dbInputStream = firstDBInputStream;
        java.util.Scanner fileScanner = new java.util.Scanner(dbInputStream);
        StringBuilder CSV = new StringBuilder(); // StringBuilder is more efficient in a loop
        while (fileScanner.hasNextLine())
            CSV.append(fileScanner.nextLine()).append("\n");
        readCSV = CSV.toString();
        fileScanner.close();
    }
    db = ArrayTable.createArrayTableFromCSV(readCSV);
}
However, if you want new information, you'll need to create a new stream to read from again.

Is it possible to avoid temp files when a Java method expects Reader/Writer arguments?

I'm calling a method from an external library with a (simplified) signature like this:
public class Alien
{
    // ...
    public void munge(Reader in, Writer out) { ... }
}
The method basically reads a String from one stream and writes its results to the other. I have several strings which I need processed by this method, but none of them exist in the file system. The strings can get quite long (ca 300KB each). Ideally, I would like to call munge() as a filter:
public void myMethod(ArrayList<String> strings)
{
    for (String s : strings) {
        String result = alienObj.mungeString(s);
        // do something with result
    }
}
Unfortunately, the Alien class doesn't provide a mungeString() method, and wasn't designed to be inherited from. Is there a way I can avoid creating two temporary files every time I need to process a list of strings? Like, pipe my input to the Reader stream and read it back from the Writer stream, without actually touching the file system?
I'm new to Java, please forgive me if the answer is obvious to professionals.
You can easily avoid temporary files by using any/all of these:
CharArrayReader / CharArrayWriter
StringReader / StringWriter
PipedReader / PipedWriter
A sample mungeString() method could look like this:
public String mungeString(String input) {
    StringWriter writer = new StringWriter();
    alienObj.munge(new StringReader(input), writer);
    return writer.toString();
}
If you are willing to work with in-memory streams, the way you would in C#, then I think PipedWriter and PipedReader are the most convenient way to do so.
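A rough sketch of that piped variant, useful if the strings are too large to buffer comfortably (note the writing side must run on its own thread, or the pipe's fixed-size buffer can deadlock):
public String mungeStringPiped(final String input) throws IOException {
    final PipedWriter pipeIn = new PipedWriter();
    PipedReader pipeOut = new PipedReader(pipeIn);
    new Thread(new Runnable() {
        public void run() {
            try (PipedWriter w = pipeIn) {
                w.write(input);
            } catch (IOException ignored) {
                // the reading side will see the pipe closed
            }
        }
    }).start();
    StringWriter result = new StringWriter();
    alienObj.munge(pipeOut, result);
    return result.toString();
}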
