NullPointerException using ImageIO.read - java

I'm getting an NPE while trying to read in an image file, and I can't for the life of me figure out why. Here is my line:
BufferedImage source = ImageIO.read(new File(imgPath));
imgPath is basically guaranteed to be valid; right before this line, the file is copied down from the server. When it hits that line, I get this stack trace:
Exception in thread "Thread-26" java.lang.NullPointerException
at com.ctreber.aclib.image.ico.ICOReader.getICOEntry(ICOReader.java:120)
at com.ctreber.aclib.image.ico.ICOReader.read(ICOReader.java:89)
at javax.imageio.ImageIO.read(ImageIO.java:1400)
at javax.imageio.ImageIO.read(ImageIO.java:1286)
at PrintServer.resizeImage(PrintServer.java:981) <---My function
<Stack of rest of my application here>
Also, this is thrown into my output window:
Can't create ICOFile: Can't read bytes: 2
I have no idea what is going on, especially since the File constructor is succeeding. I can't seem to find anybody who has had a similar problem. Anybody have any ideas? (Java 5 if that makes any difference)

I poked around some more and found that you can specify which ImageReader ImageIO will use and read the image that way. Digging through our codebase, I found that we already had a function in place for doing EXACTLY what I was trying to accomplish here. Just for anybody else who runs into a similar issue, here is the crux of the code; it should help anybody who tries to do the same:
File imageFile = new File(filename);
Iterator<ImageReader> imageReaders = ImageIO.getImageReadersByFormatName("jpeg");
if (imageReaders.hasNext()) {
    ImageReader imageReader = imageReaders.next(); // no cast needed: the iterator is typed
    ImageInputStream stream = ImageIO.createImageInputStream(imageFile);
    imageReader.setInput(stream, true);
    ImageReadParam param = imageReader.getDefaultReadParam();
    BufferedImage curImage = imageReader.read(0, param);
}
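One thing worth adding that our function didn't do: dispose of the reader and close the stream once you have the image, e.g.:
// after imageReader.read(0, param):
imageReader.dispose(); // frees the reader's internal resources
stream.close();        // dispose() does not close the input stream for you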
Thanks for the suggestions and help all.

The File constructor will almost certainly succeed, regardless of whether it points to a valid/existing file. At the very least, I'd check whether your underlying file exists via the exists() method.
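For example, a trivial guard (imgPath being the variable from the question):
File imageFile = new File(imgPath);
if (!imageFile.exists() || !imageFile.canRead()) {
    throw new IOException("Missing or unreadable image: " + imageFile.getAbsolutePath());
}
BufferedImage source = ImageIO.read(imageFile);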

Also note that ImageIO.read is not thread-safe (it reuses cached ImageReaders which are not thread-safe).
This means you can't easily read multiple files in parallel. To do that, you'll have to deal with ImageReaders yourself.
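A rough sketch of what that might look like, assuming Java 5-era concurrency utilities and a list of files you want to read in parallel (every task builds its own ImageReader, so no cached state is shared between threads):
ExecutorService pool = Executors.newFixedThreadPool(4);
List<Future<BufferedImage>> results = new ArrayList<Future<BufferedImage>>();
for (final File file : files) {
    results.add(pool.submit(new Callable<BufferedImage>() {
        public BufferedImage call() throws IOException {
            ImageInputStream stream = ImageIO.createImageInputStream(file);
            if (stream == null) {
                throw new IOException("Could not create stream for " + file);
            }
            Iterator<ImageReader> readers = ImageIO.getImageReaders(stream);
            if (!readers.hasNext()) {
                throw new IOException("No installed reader recognizes " + file);
            }
            ImageReader reader = readers.next(); // fresh instance per call
            try {
                reader.setInput(stream, true);
                return reader.read(0);
            } finally {
                reader.dispose();
                stream.close();
            }
        }
    }));
}
// remember pool.shutdown(), and results.get(i).get() to collect the images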

Have you considered that the file may simply be corrupted, or that ImageIO is trying to read it as the wrong type of file?

Googling for the ICOReader class results in one hit: IconsFactory from jide-common.
Apparently they had the same problem:
// Using ImageIO approach results in exception like this.
// Exception in thread "main" java.lang.NullPointerException
//     at com.ctreber.aclib.image.ico.ICOReader.getICOEntry(ICOReader.java:120)
//     at com.ctreber.aclib.image.ico.ICOReader.read(ICOReader.java:89)
//     at javax.imageio.ImageIO.read(ImageIO.java:1400)
//     at javax.imageio.ImageIO.read(ImageIO.java:1322)
//     at com.jidesoft.icons.IconsFactory.b(Unknown Source)
//     at com.jidesoft.icons.IconsFactory.a(Unknown Source)
//     at com.jidesoft.icons.IconsFactory.getImageIcon(Unknown Source)
//     at com.jidesoft.plaf.vsnet.VsnetMetalUtils.initComponentDefaults(Unknown Source)
//
// private static ImageIcon createImageIconWithException(final Class<?> baseClass, final String file) throws IOException {
//     try {
//         InputStream resource = baseClass.getResourceAsStream(file);
//         if (resource == null) {
//             throw new IOException("File " + file + " not found");
//         }
//         BufferedInputStream in = new BufferedInputStream(resource);
//         return new ImageIcon(ImageIO.read(in));
//     }
//     catch (IOException ioe) {
//         throw ioe;
//     }
// }
What did they do instead?
private static ImageIcon createImageIconWithException(
        final Class<?> baseClass, final String file)
        throws IOException {
    InputStream resource = baseClass.getResourceAsStream(file);
    final byte[][] buffer = new byte[1][];
    try {
        if (resource == null) {
            throw new IOException("File " + file + " not found");
        }
        BufferedInputStream in = new BufferedInputStream(resource);
        ByteArrayOutputStream out = new ByteArrayOutputStream(1024);
        buffer[0] = new byte[1024];
        int n;
        while ((n = in.read(buffer[0])) > 0) {
            out.write(buffer[0], 0, n);
        }
        in.close();
        out.flush();
        buffer[0] = out.toByteArray();
    } catch (IOException ioe) {
        throw ioe;
    }
    if (buffer[0] == null) {
        throw new IOException(baseClass.getName() + "/" + file + " not found.");
    }
    if (buffer[0].length == 0) {
        throw new IOException("Warning: " + file + " is zero-length");
    }
    return new ImageIcon(Toolkit.getDefaultToolkit().createImage(buffer[0]));
}
So you might want to try the same approach: read the raw bytes and use Toolkit to create an image from them.
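For a modern codebase the same idea shrinks to a few lines; a sketch, assuming Java 7+ for Files.readAllBytes (the jide version predates it):
import java.awt.Toolkit;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import javax.swing.ImageIcon;

static ImageIcon loadViaToolkit(String path) throws IOException {
    // Read the raw bytes ourselves; Toolkit sniffs the image type from the
    // bytes, bypassing ImageIO's plugin registry (and the broken ICO reader).
    byte[] bytes = Files.readAllBytes(Paths.get(path));
    return new ImageIcon(Toolkit.getDefaultToolkit().createImage(bytes));
}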

"it's a jpeg but doesn't have a jpeg
extension."
That might be it.
It appears that the library AC.lib-ICO is throwing the NPE. Since this library is intended to read the Microsoft ICO file format, a JPEG might be a problem for it.
Consider explicitly providing the format using an alternative method.

Related

FileChannel.open(path, CREATE|CREATE_NEW) without WRITE option throws NoSuchFileException

I had the following code:
@Nonnull
@SneakyThrows
private Pair<InputStream, Long> probeSize(@Nonnull final InputStream image) {
    final String tmpId = UUID.randomUUID().toString();
    final File probeFile = new File(tmpDir, tmpId + ".jpg");
    try (final FileChannel outChannel = FileChannel.open(probeFile.toPath(), CREATE);
         final ReadableByteChannel innChannel = Channels.newChannel(image)) {
        outChannel.transferFrom(innChannel, 0, Long.MAX_VALUE);
    }
    final Long fileSize = probeFile.length();
    return Pair.of(new FileInputStream(probeFile), fileSize);
}
This code consistently threw the following exception:
Caused by: java.nio.file.NoSuchFileException: /tmp/4bbc9008-e91c-4f18-b0f2-c61eed35066e.jpg
at sun.nio.fs.UnixException.translateToIOException(Unknown Source)
at sun.nio.fs.UnixException.rethrowAsIOException(Unknown Source)
at sun.nio.fs.UnixException.rethrowAsIOException(Unknown Source)
at sun.nio.fs.UnixFileSystemProvider.newFileChannel(Unknown Source)
Looking at the Javadoc of FileChannel.open(path, options) and the associated StandardOpenOption, there is no documentation that alludes to the fact that, to create a file, you must also open it for write.
The only options that work:
FileChannel.open(probeFile.toPath(), CREATE, WRITE)
FileChannel.open(probeFile.toPath(), CREATE_NEW, WRITE)
I only determined this by going through UnixChannelFactory.newFileChannel, where I noticed the following:
UnixChannelFactory:
protected static FileDescriptor open(int dfd,
                                     UnixPath path,
                                     String pathForPermissionCheck,
                                     Flags flags,
                                     int mode)
    throws UnixException
{
    // map to oflags
    int oflags;
    if (flags.read && flags.write) {
        oflags = O_RDWR;
    } else {
        oflags = (flags.write) ? O_WRONLY : O_RDONLY;
    }
    if (flags.write) {
        if (flags.truncateExisting)
            oflags |= O_TRUNC;
        if (flags.append)
            oflags |= O_APPEND;
        // create flags
        if (flags.createNew) {
            byte[] pathForSysCall = path.asByteArray();
            // throw exception if file name is "." to avoid confusing error
            if ((pathForSysCall[pathForSysCall.length-1] == '.') &&
                (pathForSysCall.length == 1 ||
                 (pathForSysCall[pathForSysCall.length-2] == '/')))
            {
                throw new UnixException(EEXIST);
            }
            oflags |= (O_CREAT | O_EXCL);
        } else {
            if (flags.create)
                oflags |= O_CREAT;
        }
    }
This shows that, unless you specify the WRITE option, the file will never be created.
Is this a bug or an intended functionality, that FileChannel.open cannot create a file unless it is opened for write?
I'm looking at the JDK 7 Javadoc for FileChannel.open(...).
The doc for the method says:
The READ and WRITE options determine if the file should be opened for reading and/or writing. If neither option (or the APPEND option) is contained in the array then the file is opened for reading.
The doc for CREATE_NEW says:
This option is ignored when the file is opened only for reading.
The doc for CREATE says:
This option is ignored if the CREATE_NEW option is also present or the file is opened only for reading.
Putting these three snippets together, yes, this is expected behavior.
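So the fix for the snippet above is simply to open the channel for writing as well; a minimal sketch, reusing probeFile and image from the question:
import java.nio.channels.Channels;
import java.nio.channels.FileChannel;
import java.nio.channels.ReadableByteChannel;
import static java.nio.file.StandardOpenOption.CREATE;
import static java.nio.file.StandardOpenOption.WRITE;

try (FileChannel outChannel = FileChannel.open(probeFile.toPath(), CREATE, WRITE);
     ReadableByteChannel innChannel = Channels.newChannel(image)) {
    // With WRITE present, CREATE is no longer ignored and the file is created.
    outChannel.transferFrom(innChannel, 0, Long.MAX_VALUE);
}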

How to use OpenNLP parser models in an Android app?

I went through this link for Java NLP: https://www.tutorialspoint.com/opennlp/index.htm
I tried the code below in Android:
try {
    File file = copyAssets();
    // InputStream inputStream = new FileInputStream(file);
    ParserModel model = new ParserModel(file);
    // Creating a parser
    Parser parser = ParserFactory.create(model);
    // Parsing the sentence
    String sentence = "Tutorialspoint is the largest tutorial library.";
    Parse topParses[] = ParserTool.parseLine(sentence, parser, 1);
    for (Parse p : topParses) {
        p.show();
    }
} catch (Exception e) {
}
I downloaded the file **en-parser-chunking.bin** from the internet and placed it in the assets of my Android project, but the code stops on the third line, i.e. ParserModel model = new ParserModel(file);, without throwing any exception. How can I make this work in Android? If it can't work, is there any other support for NLP on Android that doesn't consume any services?
The reason the code stalls/breaks at runtime is that you need to use an InputStream instead of a File to load the binary model resource: Android assets are not plain files on disk. Most likely, the File instance is null when you "load" it the way indicated in line 2 of your snippet. In theory, the ParserModel(File) constructor should detect this and throw an IOException. Sadly, the JavaDoc of OpenNLP is not precise about this kind of situation, and your empty catch block swallows whatever is thrown.
Moreover, the code snippet you presented should be improved so that you actually learn what went wrong.
Therefore, loading a ParserModel from within an Activity should be done differently. Here is a variant that takes care of both aspects:
AssetManager assetManager = getAssets();
InputStream in = null;
try {
    // Note: open(...) throws an IOException if the asset is missing;
    // it never returns null, so a missing model file cannot pass silently.
    in = assetManager.open("en-parser-chunking.bin");
    ParserModel model = new ParserModel(in);
    // From here, <model> is initialized and you can start playing with it...
    // Creating a parser
    Parser parser = ParserFactory.create(model);
    // Parsing the sentence
    String sentence = "Tutorialspoint is the largest tutorial library.";
    Parse[] topParses = ParserTool.parseLine(sentence, parser, 1);
    for (Parse p : topParses) {
        p.show();
    }
} catch (Exception ex) {
    Log.e("NLP", "message: " + ex.getMessage(), ex);
    // proper exception handling here...
} finally {
    if (in != null) {
        try {
            in.close();
        } catch (IOException ignored) {
            // nothing sensible left to do if closing fails
        }
    }
}
This way, you're using an InputStream approach, and at the same time you take care of proper exception and resource handling. Moreover, you can now use a debugger in case something remains unclear with the resource path references of your model files. For reference, see the official JavaDoc of AssetManager#open(String resourceName).
Note well:
Loading OpenNLP's binary resources can consume quite a lot of memory. For this reason, it may be that your Android app's request to allocate the needed memory for this operation will not be granted by the actual runtime (i.e., smartphone) environment.
Therefore, carefully monitor the amount of requested/required RAM while ParserModel model = new ParserModel(in); is invoked.
Hope it helps.

Batching multiple files to Amazon S3 using the Java SDK

I'm trying to upload multiple files to Amazon S3 all under the same key, by appending the files. I have a list of file names and want to upload/append the files in that order. I am pretty much exactly following this tutorial, but I am looping through the files and uploading each one in parts. Because the files are on HDFS (the Path is actually org.apache.hadoop.fs.Path), I am using the input stream to send the file data. Some pseudocode is below (I am commenting the blocks that are word for word from the tutorial):
// Create a list of UploadPartResponse objects. You get one of these for
// each part upload.
List<PartETag> partETags = new ArrayList<PartETag>();

// Step 1: Initialize.
InitiateMultipartUploadRequest initRequest =
    new InitiateMultipartUploadRequest(bk.getBucket(), bk.getKey());
InitiateMultipartUploadResult initResponse =
    s3Client.initiateMultipartUpload(initRequest);

try {
    int i = 1; // part number
    for (String file : files) {
        Path filePath = new Path(file);
        // Get the input stream and content length
        long contentLength = fss.get(branch).getFileStatus(filePath).getLen();
        InputStream is = fss.get(branch).open(filePath);
        long filePosition = 0;
        while (filePosition < contentLength) {
            // create request
            // upload part and add response to our list
            i++;
        }
    }
    // Step 3: Complete.
    CompleteMultipartUploadRequest compRequest =
        new CompleteMultipartUploadRequest(bk.getBucket(),
                                           bk.getKey(),
                                           initResponse.getUploadId(),
                                           partETags);
    s3Client.completeMultipartUpload(compRequest);
} catch (Exception e) {
    //...
}
However, I am getting the following error:
com.amazonaws.services.s3.model.AmazonS3Exception: The XML you provided was not well-formed or did not validate against our published schema (Service: Amazon S3; Status Code: 400; Error Code: MalformedXML; Request ID: 2C1126E838F65BB9), S3 Extended Request ID: QmpybmrqepaNtTVxWRM1g2w/fYW+8DPrDwUEK1XeorNKtnUKbnJeVM6qmeNcrPwc
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1109)
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:741)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:461)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:296)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3743)
at com.amazonaws.services.s3.AmazonS3Client.completeMultipartUpload(AmazonS3Client.java:2617)
If anyone knows what the cause of this error might be, that would be greatly appreciated. Alternatively, if there is a better way to concatenate a bunch of files into one S3 key, that would be great as well. I tried using Java's built-in SequenceInputStream, but that did not work. Any help would be greatly appreciated. For reference, the total size of all the files could be as large as 10-15 GB.
I know it's probably a bit late, but it's worth giving my contribution.
I've managed to solve a similar problem using a SequenceInputStream.
The trick is being able to calculate the total size of the result file and then feeding the SequenceInputStream an Enumeration<InputStream>.
Here's some example code that might help:
public void combineFiles() {
    List<String> files = getFiles();
    long totalFileSize = files.stream()
            .map(this::getContentLength)
            .reduce(0L, (f, s) -> f + s);
    try {
        try (InputStream partialFile = new SequenceInputStream(getInputStreamEnumeration(files))) {
            ObjectMetadata resultFileMetadata = new ObjectMetadata();
            resultFileMetadata.setContentLength(totalFileSize);
            s3Client.putObject("bucketName", "resultFilePath", partialFile, resultFileMetadata);
        }
    } catch (IOException e) {
        LOG.error("An error occurred while combining files.", e);
    }
}
private Enumeration<? extends InputStream> getInputStreamEnumeration(List<String> files) {
    return new Enumeration<InputStream>() {
        private Iterator<String> fileNamesIterator = files.iterator();

        @Override
        public boolean hasMoreElements() {
            return fileNamesIterator.hasNext();
        }

        @Override
        public InputStream nextElement() {
            try {
                return new FileInputStream(Paths.get(fileNamesIterator.next()).toFile());
            } catch (FileNotFoundException e) {
                System.err.println(e.getMessage());
                throw new RuntimeException(e);
            }
        }
    };
}
Hope this helps!

SeekableByteChannel.read() always returns 0, InputStream is fine

We have a data file for which we need to generate a CRC. (As a placeholder, I'm using CRC32 while the others figure out what CRC polynomial they actually want.) This code seems like it ought to work:
broken:
Path in = ......;
try (SeekableByteChannel reading =
         Files.newByteChannel(in, StandardOpenOption.READ))
{
    System.err.println("byte channel is a " + reading.getClass().getName() +
        " from " + in + " of size " + reading.size() + " and isopen=" + reading.isOpen());
    java.util.zip.CRC32 placeholder = new java.util.zip.CRC32();
    ByteBuffer buffer = ByteBuffer.allocate(reasonable_buffer_size);
    int bytesread = 0;
    int loops = 0;
    while ((bytesread = reading.read(buffer)) > 0) {
        byte[] raw = buffer.array();
        System.err.println("Claims to have read " + bytesread + " bytes, have buffer of size " + raw.length + ", updating CRC");
        placeholder.update(raw);
        loops++;
        buffer.clear();
    }
    // do stuff with placeholder.getValue()
}
catch (all the things that go wrong with opening files) {
    and handle them;
}
The System.err and loops stuff is just for debugging; we don't actually care how many times it takes. The output is:
byte channel is a sun.nio.ch.FileChannelImpl from C:\working\tmp\ls2kst83543216xuxxy8136.tmp of size 7196 and isopen=true
finished after 0 time(s) through the loop
There's no way to run the real code inside a debugger to step through it, but from looking at the source of sun.nio.ch.FileChannelImpl.read(), it looks like 0 is returned if the file magically becomes closed while internal data structures are being prepared; the code below is copied from the Java 7 reference implementation, comments added by me:
// sun.nio.ch.FileChannelImpl.java
public int read(ByteBuffer dst) throws IOException {
    ensureOpen();  // this throws if file is closed...
    if (!readable)
        throw new NonReadableChannelException();
    synchronized (positionLock) {
        int n = 0;
        int ti = -1;
        Object traceContext = IoTrace.fileReadBegin(path);
        try {
            begin();
            ti = threads.add();
            if (!isOpen())
                return 0;  // ...argh
            do {
                n = IOUtil.read(fd, dst, -1, nd);
            } while (......)
            .......
But the debugging code tests isOpen() and gets true. So I don't know what's going wrong.
As the current test data files are tiny, I dropped this in place just to have something working:
works for now:
try {
    byte[] scratch = Files.readAllBytes(in);
    java.util.zip.CRC32 placeholder = new java.util.zip.CRC32();
    placeholder.update(scratch);
    // do stuff with placeholder.getValue()
}
I don't want to slurp the entire file into memory for the Real Code, because some of those files can be large. I do note that readAllBytes uses an InputStream in its reference implementation, which has no trouble reading the same file that SeekableByteChannel failed to. So I'll probably rewrite the code to just use input streams instead of byte channels. I'd still like to figure out what's gone wrong in case a future scenario comes up where we need to use byte channels. What am I missing with SeekableByteChannel?
Check that 'reasonable_buffer_size' isn't zero.
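That would explain the output: ByteBuffer.allocate(0) yields a buffer with no remaining space, so read() returns 0 on the first call and the while loop never runs. Here is a sketch of the loop with a real buffer size, reusing reading from the question (it also avoids a second, latent bug in the original: updating the CRC with the whole backing array instead of only the bytes actually read):
java.util.zip.CRC32 placeholder = new java.util.zip.CRC32();
ByteBuffer buffer = ByteBuffer.allocate(64 * 1024); // any nonzero size will do
int bytesread;
while ((bytesread = reading.read(buffer)) > 0) {
    placeholder.update(buffer.array(), 0, bytesread); // only what this pass read
    buffer.clear();
}
// placeholder.getValue() now covers the whole file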

xhtmlrenderer creating PDFs of length 0

I am new to org.xhtmlrenderer.pdf.ITextRenderer and have this problem:
The PDFs that my test servlet streams to my Downloads folder are in fact empty files.
The relevant method, streamAndDeleteTheClob, is shown below.
The first try block is definitely not a problem.
The server spends a lot of time in the second try block. No exception thrown.
Can anyone suggest a solution to this problem or a good approach to debugging it?
Can anyone point me to essentially similar code that really works?
Any help would be much appreciated.
res.setContentType("application/pdf");
ServletOutputStream out = res.getOutputStream();
...
private boolean streamAndDeleteTheClob(int pageid,
                                       Connection con,
                                       ServletOutputStream out) throws IOException, ServletException {
    Statement statement;
    Clob htmlpage;
    StringBuffer pdfbuf = new StringBuffer();
    final String pageToSendQuery = "SELECT text FROM page WHERE pageid = " + pageid;

    // create the xhtml page as a CLOB (Oracle large character object) and stream it into StringBuffer pdfbuf
    try { // definitely no problem in this block
        statement = con.createStatement();
        ResultSet resultSet = statement.executeQuery(pageToSendQuery);
        if (resultSet.next()) {
            htmlpage = resultSet.getClob(1);
        } else {
            return true;
        }
        final Reader in = htmlpage.getCharacterStream();
        final char[] buffer = new char[4096];
        int n;
        while ((n = in.read(buffer)) != -1) {
            pdfbuf.append(buffer, 0, n); // append only the chars actually read
        }
    } catch (Exception ex) {
        out.println("buffering CLOB failed: " + ex);
    }

    // create pdf from StringBuffer
    try {
        DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(new InputSource(new StringReader(pdfbuf.toString())));
        ITextRenderer renderer = new ITextRenderer();
        renderer.setDocument(doc, null);
        renderer.layout();
        renderer.createPDF(out);
        out.close();
    } catch (Exception ex) {
        out.println("streaming of pdf failed: " + ex);
    }
    deleteClob(con, pageid);
    return false;
}
Using DocumentBuilder.parse this way will try to resolve the DTD referenced in the XHTML page, which takes a really long time. The easiest way to avoid that, if you are using Flying Saucer (xhtmlrenderer), is to create the document this way:
Document myDocument = XMLResource.load(myInputStream).getDocument();
Note that you can use XMLResource.load with a Reader too.
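A sketch of the second try block rewritten that way, reusing pdfbuf and out from the question (XMLResource here is org.xhtmlrenderer.resource.XMLResource):
// Parse without fetching the XHTML DTD over the network:
Document doc = XMLResource.load(new StringReader(pdfbuf.toString())).getDocument();

ITextRenderer renderer = new ITextRenderer();
renderer.setDocument(doc, null);
renderer.layout();
renderer.createPDF(out);
out.close();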
Two things I can think of.
1) If the iText document is not closed, it'll be empty. Looks like renderer.finish() will work, but createPDF(out) should do that already.
2) If there are no pages, you could get an empty doc as well... so an empty input could result in a 0-byte PDF.
3) You might be getting a perfectly valid PDF that's not being streamed properly. Try writing to a ByteArrayOutputStream and checking the length there.
4) An almost fanatical dedication to the Pope!
