Compare file extension to file header - java

I'm starting to design an application, that will, in part, run through a directory of files and compare their extensions to their file headers.
Does anyone have any advice as to the best way to approach this? I know I could simply have a lookup table that will contain the file's header signature. e.g., JPEG: \xFF\xD8\xFF\xE0
I was hoping there might be a simpler way.
Thanks in advance for your help.

I'm afraid it'll have to be more complicated than that. Not every file type has a header at all, and some (such as RAR) have their characteristic data structures at the end rather than at the beginning.
You may want to take a look at the Unix file command, which does the same job:
http://linux.die.net/man/1/file
http://linux.die.net/man/5/magic

If you don't need to do any dirty work on these values (and you don't have Linux) you could simply use an external program, like TrID, that is able to do this for you.
Maybe you can just work on its output without having to do it yourself. In any case, if you only have around 20 kinds of files to manage, a simple lookup table (e.g. HashMap<String, byte[]>) is not that bad; see the sketch below. Of course this will only work if the desired file format has a magic number, otherwise you are on your own (or reliant on an external program).
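For illustration, here is a minimal sketch of that lookup-table idea (the class and method names are made up for this example; the JPEG/PNG/PDF signatures are well-known magic numbers, but verify them against a reference before relying on them):
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class MagicNumberCheck {
    private static final Map<String, byte[]> SIGNATURES = new HashMap<>();
    static {
        SIGNATURES.put("jpg", new byte[] {(byte) 0xFF, (byte) 0xD8, (byte) 0xFF});
        SIGNATURES.put("png", new byte[] {(byte) 0x89, 'P', 'N', 'G'});
        SIGNATURES.put("pdf", new byte[] {'%', 'P', 'D', 'F'});
    }

    /** Returns true if the file starts with the signature registered for the extension. */
    public static boolean matchesExtension(String extension, File file) throws IOException {
        byte[] expected = SIGNATURES.get(extension.toLowerCase());
        if (expected == null) {
            return false; // unknown extension: no signature to compare against
        }
        try (InputStream in = new FileInputStream(file)) {
            byte[] actual = new byte[expected.length];
            int read = in.read(actual); // for a sketch we assume one read is enough
            return read == expected.length && Arrays.equals(actual, expected);
        }
    }
}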

Because of the problem of the missing significant header for some file types (thanks @Michael), I would create a map from extension to a kind of type checker with a simple API like
public interface TypeCheck {
    boolean isValid(InputStream data) throws IOException;
}
Now you can code something like
File toBeTested = ...;
Map<String, TypeCheck> typeCheckByExtension = ...;

TypeCheck check = typeCheckByExtension.get(getExtension(toBeTested.getName()));
if (check != null) {
    InputStream in = new FileInputStream(toBeTested);
    if (check.isValid(in)) {
        // process valid file
    } else {
        // process invalid file
    }
    in.close();
} else {
    // process unknown file
}
The header check for JPEG, for example, may look like
public class JpegTypeCheck implements TypeCheck {
    private static final byte[] HEADER = new byte[] {(byte) 0xFF, (byte) 0xD8, (byte) 0xFF, (byte) 0xE0};

    public boolean isValid(InputStream data) throws IOException {
        byte[] header = new byte[4];
        return data.read(header) == 4 && Arrays.equals(header, HEADER);
    }
}
For other types with no significant header you can implement completely different type checks.

You can extract the MIME type for each file and compare it to a map of MIME type to extensions (Map<String, List<String>>, where the key is the MIME type and the value is a list of valid extensions); a sketch follows the resource links below.
Resources :
Get the Mime Type from a File
JMimeMagic
On the same topic :
Java - HowTo extract MimeType from a byte[]
Getting A File's Mime Type In Java
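A minimal sketch of that approach, assuming java.nio.file.Files.probeContentType is good enough for your needs (note that the default detector may itself look at the file name on some platforms, so a content-based detector such as Tika or jMimeMagic from the links above may be a better fit); the map contents are purely illustrative:
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MimeTypeVsExtension {
    // Illustrative mapping: MIME type -> extensions considered valid for it
    private static final Map<String, List<String>> VALID_EXTENSIONS = new HashMap<>();
    static {
        VALID_EXTENSIONS.put("image/jpeg", Arrays.asList("jpg", "jpeg"));
        VALID_EXTENSIONS.put("image/png", Arrays.asList("png"));
        VALID_EXTENSIONS.put("application/pdf", Arrays.asList("pdf"));
    }

    public static boolean extensionMatchesContent(Path file) throws IOException {
        String mimeType = Files.probeContentType(file); // may return null if unknown
        if (mimeType == null) {
            return false;
        }
        String name = file.getFileName().toString();
        int dot = name.lastIndexOf('.');
        String extension = dot < 0 ? "" : name.substring(dot + 1).toLowerCase();
        return VALID_EXTENSIONS.getOrDefault(mimeType, Collections.<String>emptyList()).contains(extension);
    }
}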

You can determine the file type by reading the header with Apache Tika. The following code needs the Apache Tika jar.
InputStream is = MainApp.class.getResourceAsStream("/NetFx20SP1_x64.txt");
BufferedInputStream bis = new BufferedInputStream(is);
AutoDetectParser parser = new AutoDetectParser();
Detector detector = parser.getDetector();
Metadata md = new Metadata();
md.add(Metadata.RESOURCE_NAME_KEY,MainApp.class.getResource("/NetFx20SP1_x64.txt").getPath());
MediaType mediaType = detector.detect(bis, md);
System.out.println("MIMe Type of File : " + mediaType.toString());

Protocol Buffers: How to parse a .proto file in Java

I am trying to dynamically parse a given .proto file in Java to decode a Protobuf-encoded binary.
I have the following parsing method, in which the "proto" string contains the content of the .proto file:
public static Descriptors.FileDescriptor parseProto(String proto) throws InvalidProtocolBufferException, Descriptors.DescriptorValidationException {
    DescriptorProtos.FileDescriptorProto descriptorProto = DescriptorProtos.FileDescriptorProto.parseFrom(proto.getBytes());
    return Descriptors.FileDescriptor.buildFrom(descriptorProto, null);
}
However, on execution the previous method throws an exception with the message "Protocol message tag had invalid wire type.". I use the example .proto file from Google, so I guess it is valid: https://github.com/google/protobuf/blob/master/examples/addressbook.proto
Here is the stack trace:
15:43:24.707 [pool-1-thread-1] ERROR com.github.whiver.nifi.processor.ProtobufDecoderProcessor - ProtobufDecoderProcessor[id=42c8ab94-2d8a-491b-bd99-b4451d127ae0] Protocol message tag had invalid wire type.
com.google.protobuf.InvalidProtocolBufferException$InvalidWireTypeException: Protocol message tag had invalid wire type.
at com.google.protobuf.InvalidProtocolBufferException.invalidWireType(InvalidProtocolBufferException.java:115)
at com.google.protobuf.UnknownFieldSet$Builder.mergeFieldFrom(UnknownFieldSet.java:551)
at com.google.protobuf.GeneratedMessageV3.parseUnknownField(GeneratedMessageV3.java:293)
at com.google.protobuf.DescriptorProtos$FileDescriptorSet.<init>(DescriptorProtos.java:88)
at com.google.protobuf.DescriptorProtos$FileDescriptorSet.<init>(DescriptorProtos.java:53)
at com.google.protobuf.DescriptorProtos$FileDescriptorSet$1.parsePartialFrom(DescriptorProtos.java:773)
at com.google.protobuf.DescriptorProtos$FileDescriptorSet$1.parsePartialFrom(DescriptorProtos.java:768)
at com.google.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:163)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:197)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:209)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:214)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:49)
at com.google.protobuf.DescriptorProtos$FileDescriptorSet.parseFrom(DescriptorProtos.java:260)
at com.github.whiver.nifi.parser.SchemaParser.parseProto(SchemaParser.java:9)
at com.github.whiver.nifi.processor.ProtobufDecoderProcessor.lambda$onTrigger$0(ProtobufDecoderProcessor.java:103)
at org.apache.nifi.util.MockProcessSession.write(MockProcessSession.java:895)
at org.apache.nifi.util.MockProcessSession.write(MockProcessSession.java:62)
at com.github.whiver.nifi.processor.ProtobufDecoderProcessor.onTrigger(ProtobufDecoderProcessor.java:100)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at org.apache.nifi.util.StandardProcessorTestRunner$RunProcessor.call(StandardProcessorTestRunner.java:251)
at org.apache.nifi.util.StandardProcessorTestRunner$RunProcessor.call(StandardProcessorTestRunner.java:245)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Any idea?
Thank you!
It looks like you're trying to use FileDescriptorSet.parseFrom to populate a FileDescriptorSet. This will only work if the bytes you're providing are the binary protobuf contents - which is to say: a compiled schema. You can get a compiled schema by using the protoc command-line-tool with the --descriptor_set_out option. What you're actually passing it right now is the text bytes that make up the text schema, which is not what parseFrom expects.
Without a compiled schema, you would need a runtime .proto parser. I'm not aware of one for Java; protobuf-net includes one (protobuf-net.Reflection), but that is C#/.NET. Without an available runtime .proto parser, you'd need to shell-execute protoc instead.
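As a rough sketch of that shell-out approach (paths and names here are placeholders, and it assumes protoc is on the PATH):
import com.google.protobuf.DescriptorProtos;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class ProtoCompiler {
    /** Compiles a .proto file with protoc and parses the resulting descriptor set. */
    public static DescriptorProtos.FileDescriptorSet compile(Path protoFile) throws Exception {
        Path descriptorFile = Files.createTempFile("schema", ".desc");
        Process protoc = new ProcessBuilder(
                "protoc",
                "--proto_path=" + protoFile.getParent(),
                "--descriptor_set_out=" + descriptorFile,
                protoFile.getFileName().toString())
                .inheritIO()
                .start();
        if (protoc.waitFor() != 0) {
            throw new IllegalStateException("protoc failed for " + protoFile);
        }
        try (InputStream in = Files.newInputStream(descriptorFile)) {
            // These are the compiled binary bytes that parseFrom actually expects.
            return DescriptorProtos.FileDescriptorSet.parseFrom(in);
        }
    }
}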
Drawing from the other answers, here's a snippet of working Kotlin code from a library I'm developing.
https://github.com/asarkar/okgrpc
private fun lookupProtos(
    protoPaths: List<String>,
    protoFile: String,
    tempDir: Path,
    resolved: MutableSet<String>
): List<DescriptorProtos.FileDescriptorProto> {
    val schema = generateSchema(protoPaths, protoFile, tempDir)
    return schema.fileList
        .filter { resolved.add(it.name) }
        .flatMap { fd ->
            fd.dependencyList
                .filterNot(resolved::contains)
                .flatMap { lookupProtos(protoPaths, it, tempDir, resolved) } + fd
        }
}

private fun generateSchema(
    protoPaths: List<String>,
    protoFile: String,
    tempDir: Path
): DescriptorProtos.FileDescriptorSet {
    val outFile = Files.createTempFile(tempDir, null, null)
    val stderr = ByteArrayOutputStream()
    val exitCode = Protoc.runProtoc(
        (protoPaths.map { "--proto_path=$it" } + listOf("--descriptor_set_out=$outFile", protoFile)).toTypedArray(),
        DevNull,
        stderr
    )
    if (exitCode != 0) {
        throw IllegalStateException("Failed to generate schema for: $protoFile")
    }
    return Files.newInputStream(outFile).use { DescriptorProtos.FileDescriptorSet.parseFrom(it) }
}
The idea is to use os72/protoc-jar to write out a compiled schema/file descriptor. Then use FileDescriptorSet.parseFrom to read that file, and recurse on its dependencies.
Don't use a Java String to hold the protobuf payload. The issue is that String performs translations behind the scenes and makes assumptions about character sets.
Protobuf works on byte arrays, and the exact representation in the array has to remain unchanged. Round-tripping through String does not work.

How to set HTTP header in Apache JClouds?

I'm using Apache JClouds to connect to my OpenStack Swift installation. I managed to upload and download objects from Swift. However, I failed to see how to upload a dynamic large object to Swift.
To upload a dynamic large object, I need to upload all segments first, which I can do as usual. Then I need to upload a manifest object to combine them logically. The problem is that, to tell Swift this is a manifest object, I need to set a special header, which I don't know how to do using the JClouds API.
Here's a dynamic large object example from the OpenStack official website.
The code I'm using:
public static void main(String[] args) throws IOException {
    BlobStore blobStore = ContextBuilder.newBuilder("swift").endpoint("http://localhost:8080/auth/v1.0")
            .credentials("test:test", "test").buildView(BlobStoreContext.class).getBlobStore();
    blobStore.createContainerInLocation(null, "container");

    ByteSource segment1 = ByteSource.wrap("foo".getBytes(Charsets.UTF_8));
    Blob seg1Blob = blobStore.blobBuilder("/foo/bar/1").payload(segment1).contentLength(segment1.size()).build();
    System.out.println(blobStore.putBlob("container", seg1Blob));

    ByteSource segment2 = ByteSource.wrap("bar".getBytes(Charsets.UTF_8));
    Blob seg2Blob = blobStore.blobBuilder("/foo/bar/2").payload(segment2).contentLength(segment2.size()).build();
    System.out.println(blobStore.putBlob("container", seg2Blob));

    ByteSource manifest = ByteSource.wrap("".getBytes(Charsets.UTF_8));
    // TODO: set manifest header here
    Blob manifestBlob = blobStore.blobBuilder("/foo/bar").payload(manifest).contentLength(manifest.size()).build();
    System.out.println(blobStore.putBlob("container", manifestBlob));

    Blob dloBlob = blobStore.getBlob("container", "/foo/bar");
    InputStream input = dloBlob.getPayload().openStream();
    while (true) {
        int i = input.read();
        if (i < 0) {
            break;
        }
        System.out.print((char) i); // should print "foobar"
    }
}
The "TODO" part is my problem.
Edited:
It has been pointed out that jclouds handles large file uploads automatically, which is not so useful in our case. In fact, we do not know how large the file will be, or when the next segment will arrive, at the time we start to upload the first segment. Our API is designed so that clients can upload their files in chunks of their own chosen size and at their own chosen time and, when done, call a 'commit' to assemble these chunks into a file. So this is why we want to upload the manifest ourselves here.
Following @Everett Toews's answer, I've got my code running correctly:
public static void main(String[] args) throws IOException {
    CommonSwiftClient swift = ContextBuilder.newBuilder("swift").endpoint("http://localhost:8080/auth/v1.0")
            .credentials("test:test", "test").buildApi(CommonSwiftClient.class);

    SwiftObject segment1 = swift.newSwiftObject();
    segment1.getInfo().setName("foo/bar/1");
    segment1.setPayload("foo");
    swift.putObject("container", segment1);

    SwiftObject segment2 = swift.newSwiftObject();
    segment2.getInfo().setName("foo/bar/2");
    segment2.setPayload("bar");
    swift.putObject("container", segment2);

    swift.putObjectManifest("container", "foo/bar2");

    SwiftObject dlo = swift.getObject("container", "foo/bar", GetOptions.NONE);
    InputStream input = dlo.getPayload().openStream();
    while (true) {
        int i = input.read();
        if (i < 0) {
            break;
        }
        System.out.print((char) i);
    }
}
jclouds handles writing the manifest for you. Here are a couple of examples that might help you, UploadLargeObject and largeblob.MainApp.
Try using
Map<String, String> manifestMetadata = ImmutableMap.of(
"X-Object-Manifest", "<container>/<prefix>");
BlobBuilder.userMetadata(manifestMetadata)
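For instance, an untested sketch based on the question's code (whether Swift honors X-Object-Manifest passed as user metadata can depend on the deployment, as noted below):
Map<String, String> manifestMetadata = ImmutableMap.of(
        "X-Object-Manifest", "container/foo/bar/"); // points at the segment prefix
Blob manifestBlob = blobStore.blobBuilder("/foo/bar")
        .userMetadata(manifestMetadata)
        .payload(manifest)
        .contentLength(manifest.size())
        .build();
blobStore.putBlob("container", manifestBlob);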
If that doesn't work you might have to use the CommonSwiftClient like in CrossOriginResourceSharingContainer.java.

Java: CSV File Easy Read/Write

I'm working on a program that requires quick access to a CSV comma-delimited spreadsheet file.
So far I've been able to read from it easily using a BufferedReader.
However, now I want to be able to edit the data it reads, then export it BACK to the CSV.
The spreadsheet contains names, phone numbers, email addresses, etc. And the program lists everyone's data, and when you click on them it brings up a page with more detailed information, also pulled from the CSV. On that page you can edit the data, and I want to be able to click a "Save Changes" button, then export the data back to its appropriate line in the CSV--or delete the old one, and append the new.
I'm not very familiar with using a BufferedWriter, or whatever it is I should be using.
What I started to do is create a custom class called FileIO. It contains both a BufferedReader and a BufferedWriter. So far it has a method that returns bufferedReader.readLine(), called read(). Now I want a function called write(String line).
public static class FileIO {
    BufferedReader read;
    BufferedWriter write;

    public FileIO(String file) throws MalformedURLException, IOException {
        read = new BufferedReader(new InputStreamReader(getUrl(file).openStream()));
        write = new BufferedWriter(new FileWriter(file));
    }

    public static URL getUrl(String file) throws IOException {
        return //new URL (fileServer + file).openStream()));
                FileIO.class.getResource(file);
    }

    public String read() throws IOException {
        return read.readLine();
    }

    public void write(String line) {
        String[] data = line.split("\\|");
        String firstName = data[0];
        // int lineNum = findLineThatStartsWith(firstName);
        // write.writeLine(lineNum, line);
    }
};
I'm hoping somebody has an idea as to how I can do this?
Rather than reinventing the wheel you could have a look at OpenCSV which supports reading and writing of CSV files. Here are examples of reading & writing
Please consider Apache Commons CSV.
To quickly understand the API, there are four important classes (a usage sketch follows the list):
CSVFormat
Specifies the format of a CSV file and parses input.
CSVParser
Parses CSV files according to the specified format.
CSVPrinter
Prints values in a CSV format.
CSVRecord
A CSV record parsed from a CSV file.
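A minimal usage sketch showing how these classes fit together (assuming Commons CSV 1.x; the file name and column names are made up for the example):
import java.io.Reader;
import java.io.Writer;
import java.nio.file.Files;
import java.nio.file.Paths;
import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVParser;
import org.apache.commons.csv.CSVPrinter;
import org.apache.commons.csv.CSVRecord;

public class CommonsCsvExample {
    public static void main(String[] args) throws Exception {
        // Read contacts.csv (header row assumed: name,phone,email) and write a copy
        try (Reader in = Files.newBufferedReader(Paths.get("contacts.csv"));
             CSVParser parser = CSVFormat.DEFAULT.withFirstRecordAsHeader().parse(in);
             Writer out = Files.newBufferedWriter(Paths.get("contacts-copy.csv"));
             CSVPrinter printer = new CSVPrinter(out, CSVFormat.DEFAULT.withHeader("name", "phone", "email"))) {
            for (CSVRecord record : parser) {
                // Each CSVRecord gives access to the fields by header name
                printer.printRecord(record.get("name"), record.get("phone"), record.get("email"));
            }
        }
    }
}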
The spreadsheet contains names, phone numbers, email addresses, etc. And the program lists everyone's data, and when you click on them it brings up a page with more detailed information, also pulled from the CSV. On that page you can edit the data, and I want to be able to click a "Save Changes" button, then export the data back to its appropriate line in the CSV--or delete the old one, and append the new.
The content of a file is a sequence of bytes. CSV is a text-based file format, i.e. the sequence of bytes is interpreted as a sequence of characters, where lines are delimited by special newline characters.
Consequently, if the length of a line increases, the characters of all following lines need to be moved to make room for the new characters. Likewise, to delete a line you must move the later characters to fill the gap. That is, you cannot update a line in a CSV (at least not when changing its length) without rewriting all following lines in the file. For simplicity, I'd rewrite the entire file.
Since you already have code to write and read the CSV file, adapting it should be straightforward. But before you do that, it might be worth asking yourself if you're using the right tool for the job. If the goal is to keep a list of records and edit individual records in a form, programs such as Microsoft Access or the OpenOffice equivalent might be a more natural fit. If your UI needs go beyond what these programs provide, using a relational database to keep your data is probably a better fit (more efficient and flexible than a CSV).
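As a rough sketch of that rewrite-the-whole-file idea (the file name, the comma delimiter, and keying on the first column are assumptions for the example):
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class CsvRewrite {
    /** Replaces the line whose first field matches the key, then rewrites the whole file. */
    public static void updateRecord(Path csvFile, String key, String newLine) throws IOException {
        List<String> lines = Files.readAllLines(csvFile);
        for (int i = 0; i < lines.size(); i++) {
            if (lines.get(i).startsWith(key + ",")) {
                lines.set(i, newLine);
                break;
            }
        }
        Files.write(csvFile, lines); // rewrites the entire file in one go
    }
}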
Add Dependencies
implementation 'com.opencsv:opencsv:4.6'
Add the below code in onCreate():
InputStreamReader is = null;
try {
    String path = "storage/emulated/0/Android/media/in.bioenabletech.imageProcessing/MLkit/countries_image_crop.csv";
    CSVReader reader = new CSVReader(new FileReader(path));
    String[] nextLine;
    int lineNumber = 0;
    while ((nextLine = reader.readNext()) != null) {
        lineNumber++;
        // print the CSV file column by column: nextLine[0] is the first column,
        // nextLine[1] the second, and so on
        Log.e(TAG, "onCreate: " + nextLine[2]);
    }
}
catch (Exception e)
{
    Log.e(TAG, "onCreate: " + e);
}
I solved it using
<dependency>
    <groupId>com.fasterxml.jackson.dataformat</groupId>
    <artifactId>jackson-dataformat-csv</artifactId>
    <version>2.8.6</version>
</dependency>
and
private static final CsvMapper mapper = new CsvMapper();

public static <T> List<T> readCsvFile(MultipartFile file, Class<T> clazz) throws IOException {
    InputStream inputStream = file.getInputStream();
    CsvSchema schema = mapper.schemaFor(clazz).withHeader().withColumnReordering(true);
    ObjectReader reader = mapper.readerFor(clazz).with(schema);
    return reader.<T>readValues(inputStream).readAll();
}

How do I verify string content using Mockito in Java

I am new to using the Mockito test framework. I need to unit test a method which returns string content. The same content will also be stored in a .js file (e.g. "8.js").
How do I verify that the string content returned from the method is as expected?
Please find the below code for generating the .js file:
public String generateJavaScriptContents(Project project)
{
    try
    {
        // Creating projectId.js file
        FileUtils.mkdir(outputDir);
        fileOutputStream = new FileOutputStream(outputDir + project.getId() + ".js");
        streamWriter = new OutputStreamWriter(fileOutputStream, "UTF-8");
        StringTemplateGroup templateGroup =
            new StringTemplateGroup("viTemplates", "/var/vi-xml/template/", DefaultTemplateLexer.class);
        stringTemplate = templateGroup.getInstanceOf("StandardJSTemplate");
        stringTemplate.setAttribute("projectIdVal", project.getId());
        stringTemplate.setAttribute("widthVal", project.getDimension().getWidth());
        stringTemplate.setAttribute("heightVal", project.getDimension().getHeight());
        stringTemplate.setAttribute("playerVersionVal", project.getPlayerType().getId());
        stringTemplate.setAttribute("finalTagPath", finalPathBuilder.toString());
        streamWriter.append(stringTemplate.toString());
        return stringTemplate.toString();
    }
    catch (Exception e)
    {
        logger.error("Exception occurred while generating Standard Tag Type Content", e);
        return "";
    }
}
The above method writes the .js file, and its contents look something like this:
var projectid = 8; var playerwidth = 300; var playerheight =
250; var player_version = 1; .....
I have written a test method using Mockito, and I am able to write the .js file successfully from the test, but how do I verify its contents?
Can anyone help me to sort out this problem?
As @ŁukaszBachman mentions, you can read the contents from the js file. There are a couple of things to consider when using this approach:
The test will be slow, as you will have to wait for the js content to be written to the disk, read the content back from the disk and assert the content.
The test could theoretically be flaky because the entire js content may not be written to the disk by the time the code reads from the file. (On that note, you should probably consider calling flush() and close() on your OutputStreamWriter, if you aren't already.)
Another approach is to mock your OutputStreamWriter and inject it into the method. This would allow you to write test code similar to the following:
OutputStreamWriter mockStreamWriter = mock(OutputStreamWriter.class);
generateJavaScriptContents(mockStreamWriter, project);
verify(mockStreamWriter).append("var projectid = 8;\nvar playerwidth = 300;...");
http://mockito.googlecode.com/svn/branches/1.5/javadoc/org/mockito/Mockito.html#verify%28T%29
If you persist this *.js file on the file system, then simply create a util method which reads its contents and use some sort of assertEquals to compare them with your expected data.
Below is a sketch of reading the file contents into a String and asserting on them.
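A minimal sketch, assuming JUnit 4 and that the generator wrote to a hypothetical path like the one below:
import static org.junit.Assert.assertEquals;

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import org.junit.Test;

public class GeneratedJsTest {
    @Test
    public void generatedJsMatchesExpectedContent() throws Exception {
        String expected = "var projectid = 8; var playerwidth = 300; var playerheight = 250; var player_version = 1;";
        // Read back whatever generateJavaScriptContents() wrote to disk (path is hypothetical)
        byte[] bytes = Files.readAllBytes(Paths.get("/var/vi-xml/output/8.js"));
        String actual = new String(bytes, StandardCharsets.UTF_8);
        assertEquals(expected, actual.trim());
    }
}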

Java : How to determine the correct charset encoding of a stream

With reference to the following thread:
Java App : Unable to read iso-8859-1 encoded file correctly
What is the best way to programmatically determine the correct charset encoding of an InputStream/file?
I have tried using the following:
File in = new File(args[0]);
InputStreamReader r = new InputStreamReader(new FileInputStream(in));
System.out.println(r.getEncoding());
But on a file which I know to be encoded with ISO8859_1 the above code yields ASCII, which is not correct, and does not allow me to correctly render the content of the file back to the console.
You cannot determine the encoding of an arbitrary byte stream. This is the nature of encodings. An encoding means a mapping between a byte value and its representation. So every encoding "could" be the right one.
The getEncoding() method will return the encoding which was set up (read the JavaDoc) for the stream. It will not guess the encoding for you.
Some streams tell you which encoding was used to create them: XML, HTML. But not an arbitrary byte stream.
Anyway, you could try to guess an encoding on your own if you have to. Every language has a characteristic frequency for every char. In English the char e appears very often, but ê will appear very, very seldom. In an ISO-8859-1 stream there are usually no 0x00 chars. But a UTF-16 stream has a lot of them.
Or: you could ask the user. I've already seen applications which present a snippet of the file in different encodings and ask you to select the "correct" one.
I have used this library, similar to jchardet for detecting encoding in Java:
https://github.com/albfernandez/juniversalchardet
check this out:
http://site.icu-project.org/ (icu4j)
They have libraries for detecting the charset from an IOStream; it could be as simple as this:
BufferedInputStream bis = new BufferedInputStream(input);
CharsetDetector cd = new CharsetDetector();
cd.setText(bis);
CharsetMatch cm = cd.detect();
if (cm != null) {
    reader = cm.getReader();
    charset = cm.getName();
} else {
    throw new UnsupportedCharsetException("unknown");
}
Here are my favorites:
TikaEncodingDetector
Dependency:
<dependency>
    <groupId>org.apache.any23</groupId>
    <artifactId>apache-any23-encoding</artifactId>
    <version>1.1</version>
</dependency>
Sample:
public static Charset guessCharset(InputStream is) throws IOException {
    return Charset.forName(new TikaEncodingDetector().guessEncoding(is));
}
GuessEncoding
Dependency:
<dependency>
    <groupId>org.codehaus.guessencoding</groupId>
    <artifactId>guessencoding</artifactId>
    <version>1.4</version>
    <type>jar</type>
</dependency>
Sample:
public static Charset guessCharset2(File file) throws IOException {
    return CharsetToolkit.guessEncoding(file, 4096, StandardCharsets.UTF_8);
}
You can certainly validate the file for a particular charset by decoding it with a CharsetDecoder and watching out for "malformed-input" or "unmappable-character" errors. Of course, this only tells you if a charset is wrong; it doesn't tell you if it is correct. For that, you need a basis of comparison to evaluate the decoded results, e.g. do you know beforehand if the characters are restricted to some subset, or whether the text adheres to some strict format? The bottom line is that charset detection is guesswork without any guarantees.
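A minimal sketch of that validation idea (the class and method names are made up; it simply reports whether the bytes decode cleanly under a given charset):
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CodingErrorAction;

public class CharsetValidator {
    /** Returns true if data decodes without malformed-input or unmappable-character errors. */
    public static boolean isValidFor(byte[] data, Charset charset) {
        CharsetDecoder decoder = charset.newDecoder()
                .onMalformedInput(CodingErrorAction.REPORT)
                .onUnmappableCharacter(CodingErrorAction.REPORT);
        try {
            decoder.decode(ByteBuffer.wrap(data));
            return true;
        } catch (CharacterCodingException e) {
            return false; // the bytes are not valid in this charset
        }
    }
}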
Which library to use?
As of this writing, there are three libraries that emerge:
GuessEncoding
ICU4j
juniversalchardet
I don't include Apache Any23 because it uses ICU4j 3.4 under the hood.
How to tell which one has detected the right charset (or as close as possible)?
It's impossible to certify the charset detected by each of the above libraries. However, it's possible to ask them in turn and score the returned responses.
How to score the returned response?
Each response can be assigned one point. The more points a response has, the more confidence the detected charset carries. This is a simple scoring method. You can elaborate others.
Is there any sample code?
Here is a full snippet implementing the strategy described in the previous lines.
public static String guessEncoding(InputStream input) throws IOException {
    // Load input data
    long count = 0;
    int n = 0, EOF = -1;
    byte[] buffer = new byte[4096];
    ByteArrayOutputStream output = new ByteArrayOutputStream();

    while ((EOF != (n = input.read(buffer))) && (count <= Integer.MAX_VALUE)) {
        output.write(buffer, 0, n);
        count += n;
    }

    if (count > Integer.MAX_VALUE) {
        throw new RuntimeException("Inputstream too large.");
    }

    byte[] data = output.toByteArray();

    // Detect encoding
    Map<String, int[]> encodingsScores = new HashMap<>();

    // * GuessEncoding
    updateEncodingsScores(encodingsScores, new CharsetToolkit(data).guessEncoding().displayName());

    // * ICU4j
    CharsetDetector charsetDetector = new CharsetDetector();
    charsetDetector.setText(data);
    charsetDetector.enableInputFilter(true);
    CharsetMatch cm = charsetDetector.detect();
    if (cm != null) {
        updateEncodingsScores(encodingsScores, cm.getName());
    }

    // * juniversalchardet
    UniversalDetector universalDetector = new UniversalDetector(null);
    universalDetector.handleData(data, 0, data.length);
    universalDetector.dataEnd();
    String encodingName = universalDetector.getDetectedCharset();
    if (encodingName != null) {
        updateEncodingsScores(encodingsScores, encodingName);
    }

    // Find winning encoding
    Map.Entry<String, int[]> maxEntry = null;
    for (Map.Entry<String, int[]> e : encodingsScores.entrySet()) {
        if (maxEntry == null || (e.getValue()[0] > maxEntry.getValue()[0])) {
            maxEntry = e;
        }
    }

    String winningEncoding = maxEntry.getKey();
    //dumpEncodingsScores(encodingsScores);
    return winningEncoding;
}

private static void updateEncodingsScores(Map<String, int[]> encodingsScores, String encoding) {
    String encodingName = encoding.toLowerCase();
    int[] encodingScore = encodingsScores.get(encodingName);
    if (encodingScore == null) {
        encodingsScores.put(encodingName, new int[] { 1 });
    } else {
        encodingScore[0]++;
    }
}

private static void dumpEncodingsScores(Map<String, int[]> encodingsScores) {
    System.out.println(toString(encodingsScores));
}

private static String toString(Map<String, int[]> encodingsScores) {
    String GLUE = ", ";
    StringBuilder sb = new StringBuilder();
    for (Map.Entry<String, int[]> e : encodingsScores.entrySet()) {
        sb.append(e.getKey() + ":" + e.getValue()[0] + GLUE);
    }
    int len = sb.length();
    sb.delete(len - GLUE.length(), len);
    return "{ " + sb.toString() + " }";
}
Improvements:
The guessEncoding method reads the inputstream entirely. For large inputstreams this can be a concern. All these libraries would read the whole inputstream. This would imply a large time consumption for detecting the charset.
It's possible to limit the initial data loading to a few bytes and perform the charset detection on those few bytes only.
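For example, a small sketch of that optimization (the 8 KB sample size is an arbitrary choice):
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;

public class LimitedSampler {
    /** Reads at most maxBytes from the stream and returns them for charset detection. */
    public static byte[] readSample(InputStream input, int maxBytes) throws IOException {
        byte[] sample = new byte[maxBytes];
        int total = 0;
        int n;
        while (total < maxBytes && (n = input.read(sample, total, maxBytes - total)) != -1) {
            total += n;
        }
        return Arrays.copyOf(sample, total);
    }
}
// Usage (hypothetical): byte[] data = LimitedSampler.readSample(input, 8192);
// then pass `data` to the detectors instead of the full stream contents.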
As far as I know, there is no general library in this context that is suitable for all types of problems. So, for each problem you should test the existing libraries and select the best one that satisfies your problem's constraints, but often none of them is appropriate. In these cases you can write your own encoding detector, as I have done...
I've written a meta Java tool for detecting the charset encoding of HTML web pages, using IBM ICU4j and Mozilla JCharDet as the built-in components. Here you can find my tool; please read the README section before anything else. Also, you can find some basic concepts of this problem in my paper and in its references.
Below I provide some helpful observations which I've gathered from my work:
Charset detection is not a foolproof process, because it is essentially based on statistical data; what actually happens is guessing, not detecting
icu4j is the main tool in this context, by IBM, imho
Both TikaEncodingDetector and Lucene-ICU4j use icu4j, and their accuracy did not differ meaningfully from icu4j in my tests (at most 1%, as I remember)
icu4j is much more general than jchardet; icu4j is just a bit biased towards IBM-family encodings while jchardet is strongly biased towards UTF-8
Due to the widespread use of UTF-8 in the HTML world, jchardet is a better overall choice than icu4j, but it is not the best choice!
icu4j is great for East Asian specific encodings like EUC-KR, EUC-JP, SHIFT_JIS, BIG5 and the GB family encodings
Both icu4j and jchardet struggle with HTML pages in Windows-1251 and Windows-1256 encodings. Windows-1251 aka cp1251 is widely used for Cyrillic-based languages like Russian, and Windows-1256 aka cp1256 is widely used for Arabic
Almost all encoding detection tools use statistical methods, so the accuracy of the output strongly depends on the size and the contents of the input
Some encodings are essentially the same with only partial differences, so in some cases the guessed or detected encoding may be false but at the same time true! As with Windows-1252 and ISO-8859-1. (refer to the last paragraph under section 5.2 of my paper)
The libs above are simple BOM detectors which of course only work if there is a BOM at the beginning of the file. Take a look at http://jchardet.sourceforge.net/ which actually scans the text.
If you use ICU4J (http://icu-project.org/apiref/icu4j/)
Here is my code:
String charset = "ISO-8859-1"; //Default chartset, put whatever you want
byte[] fileContent = null;
FileInputStream fin = null;
//create FileInputStream object
fin = new FileInputStream(file.getPath());
/*
* Create byte array large enough to hold the content of the file.
* Use File.length to determine size of the file in bytes.
*/
fileContent = new byte[(int) file.length()];
/*
* To read content of the file in byte array, use
* int read(byte[] byteArray) method of java FileInputStream class.
*
*/
fin.read(fileContent);
byte[] data = fileContent;
CharsetDetector detector = new CharsetDetector();
detector.setText(data);
CharsetMatch cm = detector.detect();
if (cm != null) {
int confidence = cm.getConfidence();
System.out.println("Encoding: " + cm.getName() + " - Confidence: " + confidence + "%");
//Here you have the encode name and the confidence
//In my case if the confidence is > 50 I return the encode, else I return the default value
if (confidence > 50) {
charset = cm.getName();
}
}
Remember to put in all the try-catch handling needed.
I hope this works for you.
If you don't know the encoding of your data, it is not so easy to determine, but you could try to use a library to guess it. Also, there is a similar question.
I found a nice third party library which can detect actual encoding:
http://glaforge.free.fr/wiki/index.php?wiki=GuessEncoding
I didn't test it extensively but it seems to work.
For ISO8859_1 files, there is no easy way to distinguish them from ASCII. For Unicode files, however, one can generally detect this based on the first few bytes of the file.
UTF-8 and UTF-16 files can include a Byte Order Mark (BOM) at the very beginning of the file. The BOM is a zero-width non-breaking space.
Unfortunately, for historical reasons, Java does not detect this automatically. Programs like Notepad will check the BOM and use the appropriate encoding. Using Unix or Cygwin, you can check the BOM with the file command. For example:
$ file sample2.sql
sample2.sql: Unicode text, UTF-16, big-endian
For Java, I suggest you check out this code, which will detect the common file formats and select the correct encoding: How to read a file and automatically specify the correct encoding
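For illustration, a small sketch of a manual BOM check (it only recognizes the common UTF-8/UTF-16 BOMs, and it consumes the bytes it reads, so reopen or reset the stream afterwards):
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class BomSniffer {
    /** Returns the charset implied by a leading BOM, or null if no known BOM is present. */
    public static Charset detectBom(InputStream in) throws IOException {
        byte[] bom = new byte[3];
        int read = in.read(bom);
        if (read >= 3 && (bom[0] & 0xFF) == 0xEF && (bom[1] & 0xFF) == 0xBB && (bom[2] & 0xFF) == 0xBF) {
            return StandardCharsets.UTF_8;
        }
        if (read >= 2 && (bom[0] & 0xFF) == 0xFE && (bom[1] & 0xFF) == 0xFF) {
            return StandardCharsets.UTF_16BE;
        }
        if (read >= 2 && (bom[0] & 0xFF) == 0xFF && (bom[1] & 0xFF) == 0xFE) {
            return StandardCharsets.UTF_16LE;
        }
        return null; // no recognizable BOM; note the bytes read here have been consumed
    }
}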
An alternative to TikaEncodingDetector is to use Tika AutoDetectReader.
Charset charset = new AutoDetectReader(new FileInputStream(file)).getCharset();
A good strategy to handle this is to auto-detect the input charset.
I use org.xml.sax.InputSource in Java 11 to solve it:
...
import org.xml.sax.InputSource;
...
InputSource inputSource = new InputSource(inputStream);
inputStreamReader = new InputStreamReader(
inputSource.getByteStream(), inputSource.getEncoding()
);
Input sample:
<?xml version="1.0" encoding="utf-16"?>
<rss xmlns:dc="https://purl.org/dc/elements/1.1/" version="2.0">
<channel>
...
In plain Java:
final String[] encodings = { "US-ASCII", "ISO-8859-1", "UTF-8", "UTF-16BE", "UTF-16LE", "UTF-16" };
List<String> lines;

for (String encoding : encodings) {
    try {
        lines = Files.readAllLines(path, Charset.forName(encoding));
        for (String line : lines) {
            // do something...
        }
        break;
    } catch (IOException ioe) {
        System.out.println(encoding + " failed, trying next.");
    }
}
This approach will try the encodings one by one until one works or we run out of them.
(BTW my encodings list has only those items because they are the charsets implementations required on every Java platform, https://docs.oracle.com/javase/9/docs/api/java/nio/charset/Charset.html)
You can pick the appropriate charset in the constructor:
new InputStreamReader(new FileInputStream(in), "ISO8859_1");
