base64 image decoder for ASP classic - java

Can anyone tell me how to decode a Base64-encoded image in classic ASP? The image is encoded by the Java org.apache Base64 class, which uses RFC 2045 for the Base64 encoding.
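For context, the Java side producing such a string presumably looks something like this (a sketch assuming Apache Commons Codec's Base64 is the org.apache class in question; the exact class and file name are assumptions):
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import org.apache.commons.codec.binary.Base64;

public class EncodeImage {
    public static void main(String[] args) throws Exception {
        // Read the raw image bytes and Base64-encode them.
        byte[] imageBytes = Files.readAllBytes(Paths.get("abc.jpg"));
        // encodeBase64Chunked() produces RFC 2045 style output (76-character lines).
        String encoded = new String(Base64.encodeBase64Chunked(imageBytes), StandardCharsets.US_ASCII);
        System.out.println(encoded);
    }
}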

<%
Set objXML = Server.CreateObject("MSXml2.DOMDocument")
Set objDocElem = objXML.createElement("Base64Data")
objDocElem.DataType = "bin.base64"
objDocElem.text = "/9j/4AAQSkZJRgABAQEBLAEsAAD/2wBDAAUD" 'encodedString
'Save to disk
Set objStream = Server.CreateObject("ADODB.Stream")
objStream.Type = 1
objStream.Open
objStream.Write objDocElem.NodeTypedValue
objStream.SaveToFile "abc.jpg", 2
objStream.Close
Set objStream = Nothing
'Or send to browser
Response.ContentType = "image/jpeg"
Response.AddHeader "Content-Disposition", "attachment; filename=abc.jpg"
Response.BinaryWrite objDocElem.NodeTypedValue
Set objXML = Nothing
Set objDocElem = Nothing
%>

You can use the CAPICOM COM object. I've been using it to do the reverse (Base64 encoding).
This is what I would do (if you've got a big loop, you'd better create the object outside the loop, but in simple cases this should do it):
Function Base64Decode(encodedString)
    Dim caputil : Set caputil = CreateObject("CAPICOM.Utilities")
    If Len(encodedString) > 0 Then
        Base64Decode = caputil.Base64Decode(encodedString)
    Else
        Base64Decode = ""
    End If
    Set caputil = Nothing
End Function
Reference: http://msdn.microsoft.com/en-us/library/aa388176(v=vs.85).aspx
By the way, capicom.dll can be downloaded from the MS site: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=860ee43a-a843-462f-abb5-ff88ea5896f6

Related

MimeBodyPart.getInputStream only returns the first 8192 bytes of an email attachment

I am using MimeBodyPart.getInputStream to retrieve a file attached to an incoming email. Any time the attached file is larger than 8192 bytes (8 KiB), the rest of the data is lost; fileInput.readAllBytes().length always seems to be min(fileSize, 8192). My relevant code looks like this:
val part = multiPart.getBodyPart(i).asInstanceOf[MimeBodyPart]
if (Part.ATTACHMENT.equalsIgnoreCase(part.getDisposition)) {
  val filePath = fileManager.generateRandomUniqueFilename
  val fileName = part.getFileName
  val fileSize = part.getSize
  val fileContentType = part.getContentType
  val fileInput = part.getInputStream
  doSomething(filePath, fileInput.readAllBytes(), fileName, fileSize, fileContentType)
}
Note that the variable fileSize contains the right value (e.g. 63209 for a roughly 64 KiB file). I've tried this with two different mail servers, yielding the same result. In the documentation I cannot find anything about an 8 KiB limit. What is happening here?
Note: When I use part.getRawInputStream I receive the full data!

FileOutputStream in dart flutter

What's the equivalent of Java's FileOutputStream in Dart?
Java code
file = new FileOutputStream(logFile, true);
byte[] input = "String".getBytes();
file.write(input);
Java file output:
String
I've tried this in Dart:
Dart code
var file = File(logFile!.path).openWrite();
List input = "String".codeUnits;
file.write(input);
Dart file output:
[String]
and every time I open the file again to append "String2" and "String3" to it, the output will be
[String][String2][String3]
as opposed to Java's output
StringString2String3
To sum it up: is there a way to fix or work around this?
Why is each byte array written in Dart a new array instead of being appended to the existing content?
You can achieve that by using File.writeAsString() with FileMode.append.
Picking up your example, this would be:
var file = File(logFile!.path);
await file.writeAsString("String", mode: FileMode.append);
Did you try writeAsString()?
import 'dart:io';

void main() async {
  final filename = 'file.txt';
  var file = await File(filename).writeAsString('some content');
  // Do something with the file.
}

Word to PDF to Notes Document using the POI4Xpages api

I have created a PDF from a Word document using the POI4XPages API.
Here is the code:
var template = poiBean.buildResourceTemplateSource(null,"purchaseorder.docx");
var result = poiBean.processDocument2Stream(template, lst);
var is:java.io.InputStream = new java.io.ByteArrayInputStream(result.toByteArray());
var os:java.io.OutputStream = poiBean.buildPDFFromDocX(is)
As you can see, the result of my code is an OutputStream. The next step for me is to convert the stream to an attachment and attach it to a Notes document, but I don't know how to do that. It doesn't really matter if I first need to save it to disk or if it is written to a body field immediately.
The poiBean is described here:
https://github.com/OpenNTF/POI4Xpages/blob/master/biz.webgate.dominoext.poi/src/biz/webgate/dominoext/poi/beans/PoiBean.java
I am using SSJS here, but I guess a Java solution would work as well.
thanks
Thomas
Some copying and pasting, but this is how you stream it into a rich text field. You need to convert os to an InputStream and assign it to a variable called is2 (see the Java sketch after this answer):
var stream:NotesStream = session.createStream();
session.setConvertMIME(false);
var doc:NotesDocument = database.createDocument();
var body:NotesMIMEEntity = doc.createMIMEEntity();
stream.setContents(is2); // is2 is an InputStream
body.setContentFromBytes(stream, "application/octet-stream",NotesMIMEEntity.ENC_IDENTITY_BINARY);
stream.close();
doc.save(true, true);
session.setConvertMIME(true);
This is what I based the example on:
https://openntf.org/XSnippets.nsf/snippet.xsp?id=create-html-mails-in-ssjs-using-mime
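For the os-to-is2 conversion mentioned above, a minimal Java sketch could look like this (assuming the OutputStream returned by buildPDFFromDocX is, or has been copied into, a ByteArrayOutputStream, as processDocument2Stream's result is used in the question; the helper class name is just illustrative):
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;

public class PdfStreamHelper {
    // Wraps the bytes collected in the OutputStream into the InputStream (is2)
    // that the NotesStream snippet above expects.
    public static InputStream toInputStream(ByteArrayOutputStream os) {
        return new ByteArrayInputStream(os.toByteArray());
    }
}
In SSJS the equivalent one-liner would be assigning new java.io.ByteArrayInputStream(os.toByteArray()) to is2.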

Google App Engine - Data being stored in a weird way

I'm using Java. This is the raw data that gets inserted into the datastore:
<p>Something</p>\n<p>That</p>\n<p> </p>\n<p>Should.</p>\n<p> </p>\n
<p>I have an interesting question.</p>\n<p>Why are you like this?</p>\n
<p> </p>\n<p>Aren't you fine?</p>
This is how it gets stored:
<p>Something</p> <p>That</p> <p>�</p> <p>Should.</p> <p>�</p>
<p>I have an interesting question.</p> <p>Why are you like this?</p>
<p>�</p> <p>Aren't you fine?</p>
What's up with the weird symbols? This happens only live, not on my local dev_appserver.
EDIT
Here's the code that inserts the data:
String content = ""; // this is where the data is stored
try {
    ServletFileUpload upload = new ServletFileUpload();
    FileItemIterator iter = upload.getItemIterator(request);
    while (iter.hasNext()) {
        FileItemStream item = iter.next();
        InputStream stream = item.openStream();
        if (item.isFormField()) {
            String fieldName = item.getFieldName();
            String fieldValue = new String(IOUtils.toByteArray(stream), "utf-8");
            LOG.info("Got a form field: " + fieldName + " with value: " + fieldValue);
            // assigning the value
            if (fieldName.equals("content")) content = fieldValue;
        } else {
            ...
        }
    }
} catch (FileUploadException e) {
    // upload errors are ignored here
}
...
// insert it in datastore
Recipe recipe = new Recipe(user.getKey(), title, new Text(content), new Text(ingredients), tagsAsStrings);
pm.makePersistent(recipe);
It's a multipart/form-data form so I have to do that little item.isFormField() magic to get the actual content, and construct a String. Maybe that's causing the weird encoding issue? Not sure.
To retrieve the data I simply do:
<%=recipe.getContent().getValue()%>
Since content is of type Text (app engine type) I use the .getValue() to get the actual result. I don't think it's an issue with retrieving the data, since I can see the weird characters directly in the online app-engine datastore viewer.
Are you using Eclipse? If yes, check under File > Properties > Text file encoding that your file is encoded as UTF-8.
I would guess it is not.
So change it to UTF-8 and your issue should be fixed.
Regards,
didier
I followed this page to create a servlet filter so that all my pages are encoded in UTF-8:
How to get UTF-8 working in Java webapps?
After creating the filter, everything works!
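A minimal sketch of such a filter (assuming the javax.servlet API; the linked question has the complete version, and the filter still needs to be mapped to /* in web.xml):
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

public class CharacterEncodingFilter implements Filter {
    public void init(FilterConfig config) throws ServletException {
    }

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        // Force UTF-8 before any request parameter is read or any output is written.
        request.setCharacterEncoding("UTF-8");
        response.setCharacterEncoding("UTF-8");
        chain.doFilter(request, response);
    }

    public void destroy() {
    }
}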

Java : How to determine the correct charset encoding of a stream

With reference to the following thread:
Java App : Unable to read iso-8859-1 encoded file correctly
What is the best way to programmatically determine the correct charset encoding of an InputStream/file?
I have tried using the following:
File in = new File(args[0]);
InputStreamReader r = new InputStreamReader(new FileInputStream(in));
System.out.println(r.getEncoding());
But on a file which I know to be encoded with ISO8859_1 the above code yields ASCII, which is not correct, and does not allow me to correctly render the content of the file back to the console.
You cannot determine the encoding of an arbitrary byte stream. This is the nature of encodings. An encoding means a mapping between a byte value and its representation, so every encoding "could" be the right one.
The getEncoding() method will return the encoding which was set up (read the JavaDoc) for the stream. It will not guess the encoding for you.
Some streams tell you which encoding was used to create them: XML, HTML. But not an arbitrary byte stream.
Anyway, you could try to guess an encoding on your own if you have to. Every language has a typical frequency for each character. In English the character e appears very often, while ê appears very seldom. In an ISO-8859-1 stream there are usually no 0x00 bytes, but a UTF-16 stream has a lot of them.
Or: you could ask the user. I've already seen applications which present you a snippet of the file in different encodings and ask you to select the "correct" one.
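As a toy illustration of the byte-value heuristic described above (the 0x00 observation, not a real detector; the 25% threshold is an arbitrary assumption):
import java.nio.charset.Charset;

public class NaiveCharsetGuess {
    // Very rough guess: many 0x00 bytes suggest UTF-16, otherwise fall back to ISO-8859-1.
    public static Charset naiveGuess(byte[] data) {
        int zeroBytes = 0;
        for (byte b : data) {
            if (b == 0) {
                zeroBytes++;
            }
        }
        if (data.length > 0 && zeroBytes > data.length / 4) {
            return Charset.forName("UTF-16");
        }
        return Charset.forName("ISO-8859-1");
    }
}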
I have used this library, similar to jchardet for detecting encoding in Java:
https://github.com/albfernandez/juniversalchardet
Check this out: http://site.icu-project.org/ (ICU4j)
They have libraries for detecting the charset from an IOStream.
It could be as simple as this:
BufferedInputStream bis = new BufferedInputStream(input);
CharsetDetector cd = new CharsetDetector();
cd.setText(bis);
CharsetMatch cm = cd.detect();
if (cm != null) {
    reader = cm.getReader();
    charset = cm.getName();
} else {
    throw new UnsupportedCharsetException("charset could not be detected");
}
Here are my favorites:
TikaEncodingDetector
Dependency:
<dependency>
    <groupId>org.apache.any23</groupId>
    <artifactId>apache-any23-encoding</artifactId>
    <version>1.1</version>
</dependency>
Sample:
public static Charset guessCharset(InputStream is) throws IOException {
    return Charset.forName(new TikaEncodingDetector().guessEncoding(is));
}
GuessEncoding
Dependency:
<dependency>
    <groupId>org.codehaus.guessencoding</groupId>
    <artifactId>guessencoding</artifactId>
    <version>1.4</version>
    <type>jar</type>
</dependency>
Sample:
public static Charset guessCharset2(File file) throws IOException {
    return CharsetToolkit.guessEncoding(file, 4096, StandardCharsets.UTF_8);
}
You can certainly validate the file for a particular charset by decoding it with a CharsetDecoder and watching out for "malformed-input" or "unmappable-character" errors. Of course, this only tells you if a charset is wrong; it doesn't tell you if it is correct. For that, you need a basis of comparison to evaluate the decoded results, e.g. do you know beforehand if the characters are restricted to some subset, or whether the text adheres to some strict format? The bottom line is that charset detection is guesswork without any guarantees.
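A sketch of that validation approach, using CodingErrorAction.REPORT so a wrong charset fails loudly instead of being silently replaced:
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CodingErrorAction;
import java.nio.file.Files;
import java.nio.file.Paths;

public class CharsetValidator {
    // Returns true if the bytes decode cleanly under the given charset.
    public static boolean isValid(byte[] data, Charset charset) {
        CharsetDecoder decoder = charset.newDecoder()
                .onMalformedInput(CodingErrorAction.REPORT)
                .onUnmappableCharacter(CodingErrorAction.REPORT);
        try {
            decoder.decode(ByteBuffer.wrap(data));
            return true;
        } catch (CharacterCodingException e) {
            return false; // malformed-input or unmappable-character error
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] data = Files.readAllBytes(Paths.get(args[0]));
        System.out.println("Valid UTF-8: " + isValid(data, Charset.forName("UTF-8")));
    }
}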
Which library to use?
As of this writing, there are three libraries that emerge:
GuessEncoding
ICU4j
juniversalchardet
I don't include Apache Any23 because it uses ICU4j 3.4 under the hood.
How to tell which one has detected the right charset (or as close as possible)?
It's impossible to certify the charset detected by each of the above libraries. However, it's possible to ask them in turn and score the returned responses.
How to score the returned response?
Each response can be assigned one point. The more points a response has, the more confidence the detected charset has. This is a simple scoring method. You can elaborate on others.
Is there any sample code?
Here is a full snippet implementing the strategy described in the previous lines.
public static String guessEncoding(InputStream input) throws IOException {
    // Load input data
    long count = 0;
    int n = 0, EOF = -1;
    byte[] buffer = new byte[4096];
    ByteArrayOutputStream output = new ByteArrayOutputStream();

    while ((EOF != (n = input.read(buffer))) && (count <= Integer.MAX_VALUE)) {
        output.write(buffer, 0, n);
        count += n;
    }

    if (count > Integer.MAX_VALUE) {
        throw new RuntimeException("Inputstream too large.");
    }

    byte[] data = output.toByteArray();

    // Detect encoding
    Map<String, int[]> encodingsScores = new HashMap<>();

    // * GuessEncoding
    updateEncodingsScores(encodingsScores, new CharsetToolkit(data).guessEncoding().displayName());

    // * ICU4j
    CharsetDetector charsetDetector = new CharsetDetector();
    charsetDetector.setText(data);
    charsetDetector.enableInputFilter(true);
    CharsetMatch cm = charsetDetector.detect();
    if (cm != null) {
        updateEncodingsScores(encodingsScores, cm.getName());
    }

    // * juniversalchardet
    UniversalDetector universalDetector = new UniversalDetector(null);
    universalDetector.handleData(data, 0, data.length);
    universalDetector.dataEnd();
    String encodingName = universalDetector.getDetectedCharset();
    if (encodingName != null) {
        updateEncodingsScores(encodingsScores, encodingName);
    }

    // Find winning encoding
    Map.Entry<String, int[]> maxEntry = null;
    for (Map.Entry<String, int[]> e : encodingsScores.entrySet()) {
        if (maxEntry == null || (e.getValue()[0] > maxEntry.getValue()[0])) {
            maxEntry = e;
        }
    }

    String winningEncoding = maxEntry.getKey();
    //dumpEncodingsScores(encodingsScores);
    return winningEncoding;
}

private static void updateEncodingsScores(Map<String, int[]> encodingsScores, String encoding) {
    String encodingName = encoding.toLowerCase();
    int[] encodingScore = encodingsScores.get(encodingName);
    if (encodingScore == null) {
        encodingsScores.put(encodingName, new int[] { 1 });
    } else {
        encodingScore[0]++;
    }
}

private static void dumpEncodingsScores(Map<String, int[]> encodingsScores) {
    System.out.println(toString(encodingsScores));
}

private static String toString(Map<String, int[]> encodingsScores) {
    String GLUE = ", ";
    StringBuilder sb = new StringBuilder();
    for (Map.Entry<String, int[]> e : encodingsScores.entrySet()) {
        sb.append(e.getKey() + ":" + e.getValue()[0] + GLUE);
    }
    int len = sb.length();
    sb.delete(len - GLUE.length(), len);
    return "{ " + sb.toString() + " }";
}
Improvements:
The guessEncoding method reads the input stream entirely. For large input streams this can be a concern, since all these libraries would read the whole input stream, which implies a large time consumption for detecting the charset.
It's possible to limit the initial data loading to a few bytes and perform the charset detection on those few bytes only.
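For example, the data-loading part of guessEncoding above could be capped at a fixed sample size (the 16 KB below is an arbitrary choice) and the detectors then run on that sample only; this is a drop-in helper for the same class as guessEncoding, reusing its imports:
private static final int MAX_SAMPLE_SIZE = 16 * 1024; // arbitrary cap

private static byte[] readSample(InputStream input) throws IOException {
    ByteArrayOutputStream output = new ByteArrayOutputStream();
    byte[] buffer = new byte[4096];
    int n;
    // Stop reading once the cap is reached; detection then runs on this sample only.
    while (output.size() < MAX_SAMPLE_SIZE && (n = input.read(buffer)) != -1) {
        output.write(buffer, 0, Math.min(n, MAX_SAMPLE_SIZE - output.size()));
    }
    return output.toByteArray();
}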
As far as I know, there is no general library in this context suitable for all types of problems. So, for each problem you should test the existing libraries and select the best one that satisfies your problem's constraints, but often none of them is appropriate. In these cases you can write your own encoding detector, as I have done ...
I've written a meta Java tool for detecting the charset encoding of HTML web pages, using IBM ICU4j and Mozilla JCharDet as the built-in components. Here you can find my tool; please read the README section before anything else. Also, you can find some basic concepts of this problem in my paper and in its references.
Below I provide some helpful observations from my work:
Charset detection is not a foolproof process, because it is essentially based on statistical data and what actually happens is guessing, not detecting
icu4j is the main tool in this context by IBM, imho
Both TikaEncodingDetector and Lucene-ICU4j use icu4j, and their accuracy did not differ meaningfully from plain icu4j in my tests (at most 1%, as I remember)
icu4j is much more general than jchardet; icu4j is just a bit biased towards IBM-family encodings while jchardet is strongly biased towards UTF-8
Due to the widespread use of UTF-8 in the HTML world, jchardet is a better choice than icu4j overall, but it is not the best choice!
icu4j is great for East Asian specific encodings like EUC-KR, EUC-JP, SHIFT_JIS, BIG5 and the GB family encodings
Both icu4j and jchardet struggle with HTML pages in the Windows-1251 and Windows-1256 encodings. Windows-1251 (aka cp1251) is widely used for Cyrillic-based languages like Russian, and Windows-1256 (aka cp1256) is widely used for Arabic
Almost all encoding detection tools use statistical methods, so the accuracy of the output strongly depends on the size and contents of the input
Some encodings are essentially the same with only partial differences, so in some cases the guessed or detected encoding may be false but at the same time true, as with Windows-1252 and ISO-8859-1 (refer to the last paragraph under section 5.2 of my paper)
The libs above are simple BOM detectors which, of course, only work if there is a BOM at the beginning of the file. Take a look at http://jchardet.sourceforge.net/, which actually scans the text.
If you use ICU4J (http://icu-project.org/apiref/icu4j/), here is my code:
String charset = "ISO-8859-1"; // Default charset, put whatever you want
byte[] fileContent = null;
FileInputStream fin = null;

// create FileInputStream object
fin = new FileInputStream(file.getPath());

/*
 * Create byte array large enough to hold the content of the file.
 * Use File.length to determine size of the file in bytes.
 */
fileContent = new byte[(int) file.length()];

/*
 * To read content of the file into the byte array, use the
 * int read(byte[] byteArray) method of the java FileInputStream class.
 */
fin.read(fileContent);
fin.close();

byte[] data = fileContent;

CharsetDetector detector = new CharsetDetector();
detector.setText(data);

CharsetMatch cm = detector.detect();
if (cm != null) {
    int confidence = cm.getConfidence();
    System.out.println("Encoding: " + cm.getName() + " - Confidence: " + confidence + "%");
    // Here you have the encoding name and the confidence.
    // In my case, if the confidence is > 50 I return the encoding, else I return the default value.
    if (confidence > 50) {
        charset = cm.getName();
    }
}
Remember to add all the try/catch blocks needed.
I hope this works for you.
If you don't know the encoding of your data, it is not so easy to determine, but you could try to use a library to guess it. Also, there is a similar question.
I found a nice third-party library which can detect the actual encoding:
http://glaforge.free.fr/wiki/index.php?wiki=GuessEncoding
I didn't test it extensively but it seems to work.
For ISO8859_1 files, there is not an easy way to distinguish them from ASCII. For Unicode files, however, one can generally detect this based on the first few bytes of the file.
UTF-8 and UTF-16 files can include a Byte Order Mark (BOM) at the very beginning of the file. The BOM is a zero-width non-breaking space.
Unfortunately, for historical reasons, Java does not detect this automatically. Programs like Notepad will check the BOM and use the appropriate encoding. On Unix or Cygwin, you can check the BOM with the file command. For example:
$ file sample2.sql
sample2.sql: Unicode text, UTF-16, big-endian
For Java, I suggest you check out this code, which will detect the common file formats and select the correct encoding: How to read a file and automatically specify the correct encoding
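If you only want the BOM part of that, a hand-rolled check of the first bytes might look like this (a sketch; it returns null when no BOM is present, which is common for UTF-8):
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class BomSniffer {
    // Inspects the first bytes for a UTF-8 or UTF-16 byte order mark.
    public static Charset fromBom(byte[] head) {
        if (head.length >= 3
                && (head[0] & 0xFF) == 0xEF && (head[1] & 0xFF) == 0xBB && (head[2] & 0xFF) == 0xBF) {
            return StandardCharsets.UTF_8;
        }
        if (head.length >= 2 && (head[0] & 0xFF) == 0xFE && (head[1] & 0xFF) == 0xFF) {
            return StandardCharsets.UTF_16BE;
        }
        if (head.length >= 2 && (head[0] & 0xFF) == 0xFF && (head[1] & 0xFF) == 0xFE) {
            return StandardCharsets.UTF_16LE;
        }
        return null; // no BOM found
    }
}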
An alternative to TikaEncodingDetector is to use Tika AutoDetectReader.
Charset charset = new AutoDetectReader(new FileInputStream(file)).getCharset();
A good strategy to handle this is to have a way to auto-detect the input charset.
I use org.xml.sax.InputSource in Java 11 to solve it:
...
import org.xml.sax.InputSource;
...
InputSource inputSource = new InputSource(inputStream);
inputStreamReader = new InputStreamReader(
inputSource.getByteStream(), inputSource.getEncoding()
);
Input sample:
<?xml version="1.0" encoding="utf-16"?>
<rss xmlns:dc="https://purl.org/dc/elements/1.1/" version="2.0">
<channel>
...
In plain Java:
final String[] encodings = { "US-ASCII", "ISO-8859-1", "UTF-8", "UTF-16BE", "UTF-16LE", "UTF-16" };
List<String> lines;
for (String encoding : encodings) {
try {
lines = Files.readAllLines(path, Charset.forName(encoding));
for (String line : lines) {
// do something...
}
break;
} catch (IOException ioe) {
System.out.println(encoding + " failed, trying next.");
}
}
This approach will try the encodings one by one until one works or we run out of them.
(BTW, my encodings list has only those items because they are the charset implementations required on every Java platform: https://docs.oracle.com/javase/9/docs/api/java/nio/charset/Charset.html)
You can pick the appropriate charset in the constructor:
new InputStreamReader(new FileInputStream(in), "ISO8859_1");
