I have a string like:
1,2,3:3,4,5
The string on the left side of the delimiter (:) needs to be compared to the string on the right side. By "compare" I actually mean finding whether the elements of the right part (3,4,5) are present among the elements of the left part (1,2,3). The right part can contain duplicates and that's fine (which is why I can't simply store it in a HashSet). I've accomplished this (details below), but I need the fastest way to split and compare these strings.
This is purely a performance-based question to find out which method is faster, since the actual input I will be using is huge (on either side). There will be only a single line, and it will be read through stdin.
How I've accomplished this:
Read stdin.
Split using string.split and store the left part in a HashSet.
Store the right part in an ArrayList.
Iterate through the array list use contains() to check if the element is present in the HashSet.
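The approach described above can be spelled out as a minimal sketch (class and method names are mine, and the per-element checks are collapsed into a single boolean for brevity):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class SplitCompare {
    // Returns true only if every element on the right of ':' is present on the left.
    static boolean rightContainedInLeft(String line) {
        String[] sides = line.split(":", 2);
        // Left side has no duplicates that matter, so a HashSet is fine here
        Set<String> left = new HashSet<>(Arrays.asList(sides[0].split(",")));
        for (String element : sides[1].split(",")) {
            if (!left.contains(element)) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // 3 is present on the left, but 4 and 5 are not
        System.out.println(rightContainedInLeft("1,2,3:3,4,5")); // false
    }
}
```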
Read input into byte[] array to hold the pointer on the side of your code.
Read byte by byte, computing integer elements on the way:
int b = inputBytes[p++];
int d = b - '0';
if (0 <= d) {
    if (d <= 9) {
        element = element * 10 + d;
    } else {
        // b == ':'
    }
} else {
    // b == ','
    // add element to the hash; element = 0;
    ...
}
if (p == inputBytesLength) {
    inputBytesLength = in.read(inputBytes);
    if (inputBytesLength <= 0) { ... } // read() returns -1 at end of stream
    p = 0;
}
Use an int[] whose length is a sufficiently big power of two as the hash table:
// as add()
int h = element * 0x9E3779B9;
int i = h >>> (32 - hashSizePower);
while (hash[i] != 0) {
    if (hash[i] == element) break; // already present
    if (--i < 0) i += hashSize;
}
hash[i] = element;
// contains() similarly: probe until an empty slot or the element is found
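Fleshing that sketch out (with the duplicate check the fragment elides, and a small table size for the demo; elements are assumed non-zero so 0 can mark an empty slot):

```java
public class IntHash {
    static final int POWER = 4;            // table of 2^4 = 16 slots, for demo only
    static int[] table = new int[1 << POWER];

    static int slot(int element) {
        int h = element * 0x9E3779B9;      // Fibonacci hashing, as in the fragment
        return h >>> (32 - POWER);         // take the top POWER bits
    }

    // add() with downward linear probing; assumes non-zero elements
    static void add(int element) {
        int i = slot(element);
        while (table[i] != 0) {
            if (table[i] == element) return;   // already present
            if (--i < 0) i += table.length;
        }
        table[i] = element;
    }

    // contains() probes the same way until it hits an empty slot
    static boolean contains(int element) {
        int i = slot(element);
        while (table[i] != 0) {
            if (table[i] == element) return true;
            if (--i < 0) i += table.length;
        }
        return false;
    }

    public static void main(String[] args) {
        add(1); add(2); add(3);
        System.out.println(contains(3) + " " + contains(4)); // true false
    }
}
```

Note the table must never fill up completely, or the probe loops would spin forever; with a power-of-two size comfortably above the element count this is easy to guarantee.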
Assuming a line of input fits in JVM heap, three common approaches to parsing strings from input in Java are:
java.util.Scanner
java.io.BufferedReader#readLine & java.util.StringTokenizer
java.io.BufferedReader#readLine & java.lang.String#split
It wasn’t obvious to me which approach was best for this problem, so I decided to try it out. I generated test data, implemented a parser for each approach, and timed the results.
Test Data
I generated 4 files of test data:
testdata_1k.txt - size 20KB
testdata_10k.txt - size 205KB
testdata_100k.txt - size 2MB
testdata_1000k.txt - size 20MB
The files I generated matched the format you described. Each ,-delimited element is a random integer. The number in the file name gives the number of elements on each side of the :. For example, testdata_1k.txt has 1,000 elements on the left and 1,000 elements on the right.
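A generator along these lines (the value range and file name are arbitrary choices, not the exact code I used) can reproduce files in this format:

```java
import java.io.PrintWriter;
import java.util.Random;

public class GenTestData {
    // Builds one line: n random integers, ':', then n more random integers
    static String generate(int n, Random rnd) {
        StringBuilder sb = new StringBuilder();
        for (int side = 0; side < 2; side++) {
            if (side == 1) sb.append(':');
            for (int i = 0; i < n; i++) {
                if (i > 0) sb.append(',');
                sb.append(rnd.nextInt(1_000_000)); // arbitrary value range
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        try (PrintWriter out = new PrintWriter("testdata_1k.txt")) {
            out.println(generate(1000, new Random()));
        }
    }
}
```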
Test Code
Here's the code I used to test each approach. Please note, these are not examples of production quality code.
Scanner Code
public Map<String, Boolean> scanner(InputStream stream) {
    final Scanner in = new Scanner(new BufferedInputStream(stream));
    final HashMap<String, Boolean> result = new HashMap<String, Boolean>();
    final HashSet<String> left = new HashSet<String>();
    in.useDelimiter(",");
    boolean leftSide = true;
    while (in.hasNext()) {
        String token = in.next();
        if (leftSide) {
            int delim = token.indexOf(':');
            if (delim >= 0) {
                left.add(token.substring(0, delim));
                String rightToken = token.substring(delim + 1, token.length());
                result.put(rightToken, left.contains(rightToken));
                leftSide = false;
            } else {
                left.add(token);
            }
        } else {
            result.put(token, left.contains(token));
        }
    }
    return result;
}
StringTokenizer Code
public Map<String, Boolean> stringTokenizer(InputStream stream) throws IOException {
    final BufferedReader in = new BufferedReader(new InputStreamReader(stream));
    final HashMap<String, Boolean> result = new HashMap<String, Boolean>();
    final StringTokenizer lineTokens = new StringTokenizer(in.readLine(), ":");
    final HashSet<String> left = new HashSet<String>();
    if (lineTokens.hasMoreTokens()) {
        final StringTokenizer leftTokens = new StringTokenizer(lineTokens.nextToken(), ",");
        while (leftTokens.hasMoreTokens()) {
            left.add(leftTokens.nextToken());
        }
    }
    if (lineTokens.hasMoreTokens()) {
        final StringTokenizer rightTokens = new StringTokenizer(lineTokens.nextToken(), ",");
        while (rightTokens.hasMoreTokens()) {
            String token = rightTokens.nextToken();
            result.put(token, left.contains(token));
        }
    }
    return result;
}
String.split Code
public Map<String, Boolean> split(InputStream stream) throws IOException {
    final BufferedReader in = new BufferedReader(new InputStreamReader(stream));
    final HashMap<String, Boolean> result = new HashMap<String, Boolean>();
    final String[] splitLine = in.readLine().split(":");
    final HashSet<String> left = new HashSet<String>(Arrays.asList(splitLine[0].split(",")));
    for (String element : splitLine[1].split(",")) {
        result.put(element, left.contains(element));
    }
    return result;
}
Timing
I ran each approach 6 times against each file. I threw the first sample out. The following represents the average of the remaining 5 samples.
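A timing harness along these lines captures the warm-up-and-average pattern (this is a sketch, not the exact code I ran; for serious numbers a dedicated tool like JMH is preferable):

```java
public class Harness {
    // Times task over `runs` executions, discarding the first (warm-up) sample
    // and returning the average of the rest in milliseconds.
    static double averageMillis(Runnable task, int runs) {
        double total = 0;
        for (int run = 0; run < runs; run++) {
            long start = System.nanoTime();
            task.run();
            double millis = (System.nanoTime() - start) / 1e6;
            if (run > 0) total += millis; // throw the first sample out
        }
        return total / (runs - 1);
    }

    public static void main(String[] args) {
        // Replace the empty lambda with one of the three parsers under test
        System.out.println(averageMillis(() -> { /* parse here */ }, 6) + " millis");
    }
}
```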
Scanner
testdata_1k.txt - 23.2948 millis
testdata_10k.txt - 39.5036 millis
testdata_100k.txt - 240.5626 millis
testdata_1000k.txt - 2671.5132 millis
StringTokenizer
testdata_1k.txt - 31.2344 millis
testdata_10k.txt - 14.7926 millis
testdata_100k.txt - 102.6412 millis
testdata_1000k.txt - 1353.073 millis
String.split
testdata_1k.txt - 8.9596 millis
testdata_10k.txt - 7.8396 millis
testdata_100k.txt - 63.4854 millis
testdata_1000k.txt - 947.8384 millis
Conclusion
Assuming your data fits in JVM heap, it’s hard to beat the parsing speed of String.split compared to StringTokenizer and Scanner.
Related
I'm trying to count the occurrences per line from a text file containing a large amount of codes (numbers).
Example of text file content:
9045,9107,2376,9017
2387,4405,4499,7120
9107,2376,3559,3488
9045,4405,3559,4499
I want to compare a similar set of numbers that I get from a text field, for example:
9107,4405,2387,4499
The only result I'm looking for, is if it contains more than 2 numbers (per line) from the text file. So in this case it will be true, because:
9045,9107,2376,9017 - false (1)
2387,4405,4499,7120 - true (3)
9107,2376,3559,3488 - false (1)
9045,4405,3559,4499 - false (2)
From what I understand, the best way to do this, is by using a 2d-array, and I've managed to get the file imported successfully:
Scanner in = null;
try {
    in = new Scanner(new File("areas.txt"));
} catch (FileNotFoundException ex) {
    Logger.getLogger(NewJFrame.class.getName()).log(Level.SEVERE, null, ex);
}
List<String[]> lines = new ArrayList<>();
while (in.hasNextLine()) {
    String line = in.nextLine().trim();
    String[] splitted = line.split(", ");
    lines.add(splitted);
}
String[][] result = new String[lines.size()][];
for (int i = 0; i < result.length; i++) {
    result[i] = lines.get(i);
}
System.out.println(Arrays.deepToString(result));
The result I get:
[[9045,9107,2376,9017], [2387,4405,4499,7120], [9107,2376,3559,3488], [9045,4405,3559,4499], [], []]
From here I'm a bit stuck on checking the codes individually per line. Any suggestions or advice? Is the 2d-array the best way of doing this, or is there maybe an easier or better way of doing it?
The expected number of inputs defines the type of searching algorithm you should use.
If you aren't searching through thousands of lines then a simple algorithm will do just fine. When in doubt favour simplicity over complex and hard to understand algorithms.
While it is not an efficient algorithm, in most cases a simple nested for-loop will do the trick.
A simple implementation would look like this:
final int FOUND_THRESHOLD = 2;
String[] comparedCodes = {"9107", "4405", "2387", "4499"};
String[][] allInputs = {
    {"9045", "9107", "2376", "9017"}, // This should not match
    {"2387", "4405", "4499", "7120"}, // This should match
    {"9107", "2376", "3559", "3488"}, // This should not match
    {"9045", "4405", "3559", "4499"}, // This should match
};
List<String[]> results = new ArrayList<>();
for (String[] input : allInputs) {
    int numFound = 0;
    // Compare the codes
    for (String code : input) {
        for (String c : comparedCodes) {
            if (code.equals(c)) {
                numFound++;
                break; // Breaking out here prevents unnecessary work
            }
        }
        if (numFound >= FOUND_THRESHOLD) {
            results.add(input);
            break; // Breaking out here prevents unnecessary work
        }
    }
}
for (String[] result : results) {
    System.out.println(Arrays.toString(result));
}
which provides us with the output:
[2387, 4405, 4499, 7120]
[9045, 4405, 3559, 4499]
To expand on my comment, here's a rough outline of what you could do:
String textFieldContents = ... //get it
//build a set of the user input by splitting at commas
//a stream is used to be able to trim the elements before collecting them into a set
Set<String> userInput = Arrays.stream(textFieldContents.split(","))
        .map(String::trim)
        .collect(Collectors.toSet());
//stream the lines in the file
List<Boolean> matchResults = Files.lines(Path.of("areas.txt"))
        //map each line to true/false
        .map(line -> {
            //split the line and stream the parts
            return Arrays.stream(line.split(","))
                    //trim each part
                    .map(String::trim)
                    //select only those contained in the user input set
                    .filter(part -> userInput.contains(part))
                    //count matching elements and return whether there are more than 2 or not
                    .count() > 2L;
        })
        //collect the results into a list, each element position should correspond to the zero-based line number
        .collect(Collectors.toList());
If you need to collect the matching lines instead of a flag per line you could replace map() with filter() (same content) and change the result type to List<String>.
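For instance, a filter()-based variant that collects the matching lines themselves might look like this (reading from an in-memory list here, rather than the file, to keep the sketch self-contained):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class MatchingLines {
    // Keeps only the lines that share more than 2 numbers with the user input
    static List<String> matchingLines(Set<String> userInput, List<String> lines) {
        return lines.stream()
                .filter(line -> Arrays.stream(line.split(","))
                        .map(String::trim)
                        .filter(userInput::contains)
                        .count() > 2L)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Set<String> userInput = Set.of("9107", "4405", "2387", "4499");
        List<String> lines = List.of(
                "9045,9107,2376,9017",
                "2387,4405,4499,7120",
                "9107,2376,3559,3488",
                "9045,4405,3559,4499");
        // Only the second line shares more than 2 numbers with the input
        System.out.println(matchingLines(userInput, lines)); // [2387,4405,4499,7120]
    }
}
```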
Say you have a text file with "abcdefghijklmnop" and you have to add 3 characters at a time to an array list of type string. So the first cell of the array list would have "abc", the second would have "def" and so on until all the characters are inputted.
public ArrayList<String> returnArray() throws FileNotFoundException
{
    int i = 0;
    ArrayList<String> list = new ArrayList<String>();
    Scanner scanCharacters = new Scanner(file);
    while (scanCharacters.hasNext())
    {
        list.add(scanCharacters.next().substring(i, i + 3));
        i += 3;
    }
    scanCharacters.close();
    return list;
}
Please use the below code,
ArrayList<String> list = new ArrayList<String>();
int i = 0;
int x = 0;
Scanner scanCharacters = new Scanner(file);
scanCharacters.useDelimiter(System.getProperty("line.separator"));
String finalString = "";
while (scanCharacters.hasNext()) {
    String[] tokens = scanCharacters.next().split("\t");
    for (String str : tokens) {
        finalString = StringUtils.deleteWhitespace(str);
        for (i = 0; i < finalString.length(); i = i + 3) {
            x = i + 3;
            if (x < finalString.length()) {
                list.add(finalString.substring(i, i + 3));
            } else {
                list.add(finalString.substring(i, finalString.length()));
            }
        }
    }
}
System.out.println("list" + list);
Here I have used StringUtils.deleteWhitespace(str) from Apache Commons Lang to delete the blank space from the file tokens. The if condition inside the for loop checks whether a full three-character substring is still available; if not, whatever characters are left go into the list. My text file contains the lines below:
asdfcshgfser ajsnsdxs
sasdsd fghfdgfd
After executing the program the result is:
list[asd, fcs, hgf, ser, ajs, nsd, xs, sas, dsd, fgh, fdg, fd]
public ArrayList<String> returnArray() throws FileNotFoundException
{
    ArrayList<String> list = new ArrayList<String>();
    Scanner scanCharacters = new Scanner(file);
    String temp = "";
    while (scanCharacters.hasNext())
    {
        temp += scanCharacters.next();
    }
    while (temp.length() > 2) {
        list.add(temp.substring(0, 3));
        temp = temp.substring(3);
    }
    if (temp.length() > 0) {
        list.add(temp);
    }
    scanCharacters.close();
    return list;
}
In this example I read in all of the data from the file, and then parse it in groups of three. Scanner can never backtrack, so using next() the way you're using it will leave out some of the data: you get whitespace-separated words (whitespace is Java's default delimiter) and then substring only the first 3 letters off each.
IE:
ALEXCY WOWZAMAN
Would give you:
ALE and WOW
The way my example works is that it collects all of the letters into one string and continuously substrings off three letters at a time until there are no more, and finally it adds the remainder. Like the others have said, it would be good to read up on a different data parser such as BufferedReader. In addition, I suggest you research substrings and Scanner if you want to continue to use your current method.
I am working on a interview problem on removing duplicate characters from a string.
The naive solution, using two for-loops to check each index against the current index, is actually more difficult to implement.
I tried this problems a couple times, with the first attempt only working on sorted strings i.e. aabbcceedfg that is O(n).
I then realized I could use a HashSet. This solution's time complexity is O(n) as well, but it uses two Java library classes, StringBuffer and HashSet, so its space complexity is not that great.
public static String duplicate(String s) {
    HashSet<Character> dup = new HashSet<Character>();
    StringBuffer string = new StringBuffer();
    for (int i = 0; i < s.length() - 1; i++) {
        if (!dup.contains(s.charAt(i))) {
            dup.add(s.charAt(i));
            string.append(s.charAt(i));
        }
    }
    return string.toString();
}
I was wondering - is this solution optimal and valid for a technical interview? If it's not the most optimal, what is the better method?
I did Google a lot for the most optimal solution to this problem, however, most solutions used too many Java-specific libraries that are totally not valid in an interview context.
You can't improve on the complexity but you can optimize the code while keeping the same complexity.
Use a BitSet instead of a HashSet (or even just a boolean[]) - there are only 65536 different char values, and 65536 bits fit in 8KB. Each bit means "whether you have seen the character before".
Create the StringBuffer with a specified initial size - a very minor improvement
Bugfix: your for-loop ended at i < s.length() - 1 but it should end at i < s.length(), else it will ignore the last character of the string.
public static String duplicate(String s) {
    BitSet bits = new BitSet();
    StringBuffer string = new StringBuffer(s.length());
    for (int i = 0; i < s.length(); i++) {
        if (!bits.get(s.charAt(i))) {
            bits.set(s.charAt(i));
            string.append(s.charAt(i));
        }
    }
    return string.toString();
}
When using sets/maps, don't forget that almost all methods return values. For example, Set.add returns whether it was actually added. Set.remove returns whether it was actually removed. Map.put and Map.remove return the previous value. Using this you don't need to query the set twice, just change to if(dup.add(s.charAt(i))) ....
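Applied to the method in the question (with the loop running over the whole string), that single-lookup version reads:

```java
import java.util.HashSet;

public class Dedup {
    public static String duplicate(String s) {
        HashSet<Character> dup = new HashSet<>();
        StringBuilder sb = new StringBuilder(s.length());
        for (int i = 0; i < s.length(); i++) {
            // add() returns false when the character was already in the set,
            // so one call both tests and inserts
            if (dup.add(s.charAt(i))) {
                sb.append(s.charAt(i));
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(duplicate("aabbccdea")); // abcde
    }
}
```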
The second improvement from the performance point of view could be to dump the String into char[] array and process it manually without any StringBuffer/StringBuilder:
public static String duplicate(String s) {
    HashSet<Character> dup = new HashSet<Character>();
    char[] chars = s.toCharArray();
    int i = 0;
    for (char ch : chars) {
        if (dup.add(ch))
            chars[i++] = ch;
    }
    return new String(chars, 0, i);
}
Note that we are writing result in the same array which we are iterating. This works as resulting position never exceeds iterating position.
Of course using BitSet as suggested by @ErwinBolwidt would be even more performant in this case:
public static String duplicate(String s) {
    BitSet dup = new BitSet();
    char[] chars = s.toCharArray();
    int i = 0;
    for (char ch : chars) {
        if (!dup.get(ch)) {
            dup.set(ch, true);
            chars[i++] = ch;
        }
    }
    return new String(chars, 0, i);
}
Finally just for completeness there's Java-8 Stream API solution which is slower, but probably more expressive:
public static String duplicateStream(String s) {
    return s.codePoints().distinct()
            .collect(StringBuilder::new, StringBuilder::appendCodePoint,
                    StringBuilder::append).toString();
}
Note that processing code points is better than processing chars as your method will work fine even for Unicode surrogate pairs.
If it's a really long string, your algorithm will spend most of its time just throwing away characters.
Another approach that could be faster with long strings (like book-long) is to simply go through the alphabet, looking for the first occurrence of each character and storing the index at which it was found. Once all characters have been found, create the new string based on where each was found.
package se.wederbrand.stackoverflow.alphabet;
import java.util.Map;
import java.util.TreeMap;
public class Finder {
    public static void main(String[] args) {
        String target = "some really long string"; // like millions of characters
        // A TreeMap keeps the entries sorted by index, so the characters come
        // out in the order they first appear in the string
        Map<Integer, Character> found = new TreeMap<Integer, Character>();
        for (Character c = 'a'; c <= 'z'; c++) {
            int foundAt = target.indexOf(c);
            if (foundAt != -1) {
                found.put(foundAt, c);
            }
        }
        StringBuffer result = new StringBuffer();
        for (Map.Entry<Integer, Character> entry : found.entrySet()) {
            result.append(entry.getValue());
        }
        System.out.println(result.toString());
    }
}
Note that on strings where at least one character is missing this will be slow.
I'm trying to write a program that takes a text file as input and adds the words in it as keys, where the value associated with each word should be the list of page numbers it is located on. The text looks like this:
Page1
blah bla bl
Page2
some blah
So for word "blah" output must be
blah : [1,2].
I only inserted the keys, but I can't figure out how to insert associated values to them. Here's what I have so far:
BufferedReader reader = new BufferedReader(input);
try {
    Map<String, List<Integer>> library
        = new TreeMap<String, List<Integer>>();
    String line = reader.readLine();
    while (line != null) {
        String[] tokens = line.trim().split("\\s+");
        for (int i = 0; i < tokens.length; i++) {
            String word = tokens[i];
            if (!library.containsKey(word)
                    && !word.startsWith("Page")) {
                library.put(word, new LinkedList<Integer>());
                if (tokens[0].startsWith("Page")
                        && library.containsKey(word)) {
                    List<Integer> pages = library.get(word);
                    int page = getNum(tokens[0]);
                    pages.add(page);
                    page++;
                }
            }
        }
        line = reader.readLine();
    }
}
To get number of page I use this method
private static int getNum(String s) {
    int result = 0;
    int p = 1;
    int i = s.length() - 1;
    while (i >= 0) {
        int d = s.charAt(i) - '0';
        if (d >= 0 && d <= 9) {
            result += d * p;
        } else {
            break;
        }
        i--;
        p *= 10;
    }
    return result;
}
Thanks for all your ideas!
The pages variable is declared inside the scope of your inner if statement. Once that block ends the variable is out of scope and undefined. If you want to use the list of pages later then it needs to be declared as a class variable.
I assume you are using pages to later generate a table of contents. But it's not strictly necessary as you can generate it later from your word index - I'll demonstrate how to do that below.
You also need to declare a currentPage variable which hold the latest 'PageN' text you have seen. There's no need to increment this manually: you should just store the number in the text (which copes with blank pages).
Page numbers seem to always be on their own line so page detection should be on the line text not on the word (which copes with situations where a line reads 'for more information see Page72').
It's also worth checking that there's a valid page number before your first word.
So putting that all together your code should be structured something like the following:
Map<String, Set<Integer>> index = new TreeMap<>();
int currentPage = -1;
String currentLine;
while ((currentLine = reader.readLine()) != null) {
    if (isPage(currentLine)) {
        currentPage = getPageNum(currentLine);
    } else {
        assert currentPage > 0;
        for (String word : words(currentLine)) {
            if (!index.containsKey(word))
                index.put(word, new TreeSet<>());
            index.get(word).add(currentPage);
        }
    }
}
I've separated methods words, isPage and getPageNum but you seem to have working code for all of those.
I've also changed the List of pages to a Set to reflect the fact that you only want a word-page reference once in the index.
To get an ordered list of all pages from the index use:
index.values().stream()
        .flatMap(Set::stream).distinct().sorted()
        .collect(Collectors.toList());
That's assuming Java8 but it's not too hard to convert if you don't have streams.
If you are going to generate a reverse index (pages to words) then for efficiency reasons you should probably create the reverse map (Map<Integer, List<String>>) as you are processing the words.
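A sketch of maintaining that reverse map alongside the forward index (the helper and its names are my own invention, not code from the question):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;
import java.util.TreeSet;

public class ReverseIndex {
    // Records word -> page and page -> word in one pass over the input
    static void record(Map<String, Set<Integer>> index,
                       Map<Integer, List<String>> reverse,
                       String word, int page) {
        index.computeIfAbsent(word, w -> new TreeSet<>()).add(page);
        reverse.computeIfAbsent(page, p -> new ArrayList<>()).add(word);
    }

    public static void main(String[] args) {
        Map<String, Set<Integer>> index = new TreeMap<>();
        Map<Integer, List<String>> reverse = new TreeMap<>();
        record(index, reverse, "blah", 1);
        record(index, reverse, "blah", 2);
        record(index, reverse, "some", 2);
        System.out.println(index);   // {blah=[1, 2], some=[2]}
        System.out.println(reverse); // {1=[blah], 2=[blah, some]}
    }
}
```

Note the reverse map's lists can accumulate duplicate words if a word repeats on the same page; use a Set there too if that matters.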
You should try something like this. I'm not totally sure how you're using the pages, but this code will check if library contains the word (like you already have) and then if it doesn't it will add the page number to the list for that word.
if (!library.containsKey(word) && !word.startsWith("Page")) {
    library.put(word, new LinkedList<Integer>());
} else {
    library.get(word).add(page);
}
Your problem seems to be in this piece of logic:
if (tokens[0].startsWith("Page")
&& library.containsKey(word)) {
clearly you are adding page numbers only when the line starts with Page; otherwise the logic inside the if condition is not executed, so you never update the page numbers for any words.
I am having trouble figuring out why I am getting a mismatch exception; everything was working fine until I opened it on my computer. I have to take the census file and read it in, which I did, and I found the growth rate. The only problem is this mismatch exception, and I don't understand why it occurs. Please help.
Here is the file example:
Alabama,4447100
Alaska,626932
Arizona,5130632
Arkansas,2673400
Here is the program:
public static void main(String[] args) throws IOException {
    File f = new File("census2010.txt");
    File ff = new File("census2000.txt");
    if (!f.exists()) {
        System.out.println("f does not exist ");
    }
    if (!ff.exists()) {
        System.out.println("ff does not exist ");
    }
    Scanner infile2010 = new Scanner(f);
    Scanner infile2000 = new Scanner(ff);
    infile2010.useDelimiter("[\t|,|\n|\r]+");
    infile2000.useDelimiter("[\t|,|\n|\r]+");
    final int MAX = 60;
    int[] pop10 = new int[MAX];
    int[] pop00 = new int[MAX];
    String[] statearray10 = new String[MAX];
    String[] statearray00 = new String[MAX];
    double[] growtharray = new double[MAX];
    int fillsize;
    fillsize = fillArrayPop(pop10, statearray10, MAX, infile2010, prw);
    fillsize = fillArrayPop2(pop00, statearray00, MAX, infile2000, prw); // the mismatch exception occurs here
    growthRate(growtharray, pop10, pop00, fillsize);
    sortarray(growtharray, statearray10, fillsize);
    printarray(growtharray, statearray10, fillsize, prw);
}

public static int fillArrayPop(int[] num, String[] statearray10, int mAX, Scanner infile2010, PrintWriter prw) throws FileNotFoundException {
    int retcnt = 0;
    int pop;
    String astate;
    while (infile2010.hasNext()) {
        astate = infile2010.next();
        pop = infile2010.nextInt();
        statearray10[retcnt] = astate;
        num[retcnt] = pop;
        retcnt++;
    }
    return (retcnt);
}

public static int fillArrayPop2(int[] number, String[] statearray00, int mAX, Scanner infile2000, PrintWriter prw) throws FileNotFoundException {
    int retcounts = 0;
    int pop;
    String state;
    while (infile2000.hasNext()) {
        state = infile2000.next();
        pop = infile2000.nextInt(); // the mismatch exception occurs here
        statearray00[retcounts] = state;
        number[retcounts] = pop;
        retcounts++;
    }
    return (retcounts);
}

public static void printarray(double[] growth, String[] state, int fillsize, PrintWriter prw) {
    DecimalFormat form = new DecimalFormat("0.000");
    for (int counts = 0; counts < fillsize; counts++) {
        System.out.println("For the position [" + counts + "] the growth rate for " + state[counts] + " is " + form.format(growth[counts]));
        prw.println("For the position [" + counts + "] the growth rate for " + state[counts] + " is " + form.format(growth[counts]));
    }
    prw.close();
    return;
}

public static void growthRate(double[] percent, int[] pop10, int[] pop00, int fillsize) {
    double growthrate = 0.0;
    for (int count = 0; count < fillsize; count++) {
        percent[count] = (double) (pop10[count] - pop00[count]) / pop00[count];
    }
}

public static void sortarray(double[] percent, String[] statearray10, int fillsize) {
    for (int fill = 0; fill < fillsize - 1; fill = fill + 1) {
        for (int compare = fill + 1; compare < fillsize; compare++) {
            if (percent[compare] < percent[fill]) {
                double poptemp = percent[fill];
                percent[fill] = percent[compare];
                percent[compare] = poptemp;
                String statetemp = statearray10[fill];
                statearray10[fill] = statearray10[compare];
                statearray10[compare] = statetemp;
            }
        }
    }
}
Why am I getting this error?
As @Pescis's answer says, the exception means that the token you are trying to get using nextInt is not a valid representation of an int: either it contains unexpected characters, or it is out of range.
If you showed us the exception's message String, and the complete stacktrace, it would be easier to diagnose this. Without that, we have to resort to guessing.
Guess #1: there is something bad in the actual input data file that you are reading at that point.
Guess #2: the way you are setting the delimiters regex is wrong. Your regex seems to be mixing a character class and alternation in a way that is almost certainly incorrect. It should be this:
useDelimiter("[\\t,\\n\\r]+");
(It is not clear that fixing this will fix the problem ... but your regex is wrong none-the-less.)
Comments:
You have two methods fillArrayPop and fillArrayPop2 that seem to be doing the same thing. As far as I can tell the only differences are different variable names and other non-consequential stuff.
Your approach to "parsing" is ignoring the fact that the files are actually line structured. I think you should (conceptually) read lines and then split the lines into fields. What you are currently doing is treating them as a sequence of alternating fields.
Ideally, your "parsing" should be more robust. It should at least attempt to diagnose a mismatch between the expected file format and what you actually read from the file. (If I was writing this, I would at a minimum try to output the line or line number at which the "badness" was detected.)
From the javadoc of Scanner.nextInt():
InputMismatchException - if the next token does not match the Integer
regular expression, or is out of range
This means the next token being read when the error is thrown cannot be parsed as an Integer. This can happen, for instance, if there are non-numeric characters in the token, or if the integer is out of range, i.e. smaller than -2,147,483,648 or bigger than 2,147,483,647.
If no token in your file is out of range, it's most likely because the file's layout is not how you are programming it to be.
If you take the prw calls out of your print method, your program should run without error: delete prw.println("For the position [" + counts + "] the growth rate for " + state[counts] + " is " + form.format(growth[counts])); and delete prw.close(), and it should run.
If the file is formatted exactly as you have described above, then the .next() method call immediately above the error is actually pulling the whole line (i.e. the variable "state" would contain "Alabama,4447100").
Since there is no whitespace/delimiter between the state name and population, .next() reads the whole line, and the next call to .nextInt() attempts to read the next line (a string), generating an error.
I would recommend reading each line individually, and then splitting up the two pieces of data by the comma that separates them. Also, why do you pass a method a PrintWriter object, if the object isn't used in the method?
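A line-based fill method along those lines (a sketch with assumed names; it takes any BufferedReader, so it can be fed an in-memory string for testing) might look like:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class CensusParser {
    // Fills the parallel arrays from "Name,population" lines; returns the count.
    static int fillArrayPop(int[] pops, String[] states, BufferedReader in)
            throws IOException {
        int count = 0;
        String line;
        while ((line = in.readLine()) != null) {
            line = line.trim();
            if (line.isEmpty()) continue;        // skip blank lines
            int comma = line.lastIndexOf(',');   // split the name from the number
            states[count] = line.substring(0, comma);
            pops[count] = Integer.parseInt(line.substring(comma + 1));
            count++;
        }
        return count;
    }

    public static void main(String[] args) throws IOException {
        int[] pops = new int[60];
        String[] states = new String[60];
        int n = fillArrayPop(pops, states, new BufferedReader(
                new StringReader("Alabama,4447100\nAlaska,626932\n")));
        System.out.println(n + " " + states[0] + " " + pops[0]); // 2 Alabama 4447100
    }
}
```

Using lastIndexOf(',') also survives state names that themselves contain a comma, and parsing line by line means a malformed line can be reported with its content instead of throwing a bare InputMismatchException.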