My Question: How do you run a report and download it as text? In BusinessObjects you can download reports as a plain text file, and the API documentation indicates that reports can be downloaded in various formats. How is this accomplished through the API?
How to download them as PDF: In their documentation, they describe how to download them as a PDF file:
ViewSupport pdfViewSupport = new ViewSupport();
pdfViewSupport.setOutputFormat(OutputFormatType.PDF);
pdfViewSupport.setViewType(ViewType.BINARY);
pdfViewSupport.setViewMode(ViewModeType.DOCUMENT);
RetrieveBinaryView retBinView = new RetrieveBinaryView();
retBinView.setViewSupport(pdfViewSupport);
RetrieveData retBOData = new RetrieveData();
retBOData.setRetrieveView(retBinView);
DocumentInformation docInfo = boReportEngine.getDocumentInformation(struid, null, null, null, retBOData);
BinaryView myBOView = (BinaryView) docInfo.getView();
byte[] docContents = myBOView.getContent();
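For reference, once this works, writing the returned bytes out as a PDF is just a matter of streaming them to a file; a minimal sketch (the file name is only an example):
java.io.FileOutputStream fos = new java.io.FileOutputStream("report.pdf"); // example output path
try {
    fos.write(docContents);
} finally {
    fos.close();
}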
When I change:
pdfViewSupport.setOutputFormat(OutputFormatType.PDF);
pdfViewSupport.setViewType(ViewType.BINARY);
pdfViewSupport.setViewMode(ViewModeType.DOCUMENT);
to
pdfViewSupport.setOutputFormat(OutputFormatType.INT_HTML);
pdfViewSupport.setViewType(ViewType.INT_CHARACTER);
pdfViewSupport.setViewMode(ViewModeType.INT_REPORT_PAGE);
I get the following error:
org.apache.axis2.AxisFault: Binary view of such a document should be only requested with the use of ViewType.BINARY (WRE 01151)
The funny thing is that I set the ViewType to be INT_CHARACTER, not BINARY...
It breaks on the line:
DocumentInformation docInfo = boReportEngine.getDocumentInformation(struid, null, null, null, retBOData);
What I'm trying to do: It's somewhat involved, but I basically want a report that returns a single row and prints only that row (nothing else), and then to download that report as text, because the text is XML that I use in another program.
Any help would be great!
Note: I'm running on a 3.2 server, but we'll be upgrading to 4.0 fairly soon. A solution that works for both versions would be ideal; otherwise, separate v3.x and v4 solutions would be just as welcome :)
The problem is this line:
RetrieveBinaryView retBinView = new RetrieveBinaryView();
I don't fully understand how this all works internally, but this is what you're looking for:
ViewSupport viewSupport = ViewSupport.Factory.newInstance();
viewSupport.setOutputFormat(OutputFormatType.INT_XML);
viewSupport.setViewType(ViewType.INT_CHARACTER);
viewSupport.setViewMode(ViewModeType.INT_REPORT_PAGE);
RetrieveData retBOData = RetrieveData.Factory.newInstance();
RetrieveXMLView retXMLView = RetrieveXMLView.Factory.newInstance();
retXMLView.setViewSupport(viewSupport);
retBOData.setRetrieveView(retXMLView);
DocumentInformation boDocInfo = getDocInfo(actions, retBOData);
XMLView bView = (XMLView) boDocInfo.getView();
ByteArrayOutputStream out = new ByteArrayOutputStream(bView.getContentLength());
bView.getContent().save(out);
byte[] reportBytes = out.toByteArray();
String reportInString = new String(reportBytes);
However, reportInString here is XML representing the entire report. So what I did was wrap the part I actually want in a prefix and a suffix. For example, if I wrap it with ##$ThisIsWhatYouWant- and -End$##, I can extract it like this:
Pattern patt = Pattern.compile("(?i)##\\$ThisIsWhatYouWant-(.*?)-End\\$##", Pattern.DOTALL);
Matcher match = patt.matcher(reportInString);
if (match.find()) {
    return match.group(1);
}
return null;
P.S. This should work for both 3.x and 4.0 BO Servers.
I have created a PDF from a Word document using the POI4XPages API.
Here is the code:
var template = poiBean.buildResourceTemplateSource(null,"purchaseorder.docx");
var result = poiBean.processDocument2Stream(template, lst);
var is:java.io.InputStream = new java.io.ByteArrayInputStream(result.toByteArray());
var os:java.io.OutputStream = poiBean.buildPDFFromDocX(is)
As you can see, the result of my code is an OutputStream. The next step for me is to convert the stream to an attachment and attach it to a NotesDocument, but I don't know how to do that. It doesn't really matter whether I first need to write it to disk or whether it is written to a body field immediately.
The poiBean is described here
https://github.com/OpenNTF/POI4Xpages/blob/master/biz.webgate.dominoext.poi/src/biz/webgate/dominoext/poi/beans/PoiBean.java
I am using SSJS here, but I guess a Java solution would work as well.
thanks
Thomas
Some copying and pasting is involved, but this is how you stream it into a rich text field. Note that you first need to convert os to an InputStream and assign it to a variable called is2.
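A possible way to obtain is2 from os (assuming buildPDFFromDocX returns a java.io.ByteArrayOutputStream, which I have not verified against the PoiBean source) would be:
var is2:java.io.InputStream = new java.io.ByteArrayInputStream(os.toByteArray()); // os is the OutputStream from buildPDFFromDocX
With is2 in place, the rest looks like this: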
var stream:NotesStream = session.createStream();
session.setConvertMIME(false);
var doc:NotesDocument = database.createDocument();
var body:NotesMIMEEntity = doc.createMIMEEntity();
stream.setContents(is2); // is2 is the InputStream created from the PDF OutputStream
body.setContentFromBytes(stream, "application/octet-stream",NotesMIMEEntity.ENC_IDENTITY_BINARY);
stream.close();
doc.save(true, true);
session.setConvertMIME(true);
This is what I based the example on
https://openntf.org/XSnippets.nsf/snippet.xsp?id=create-html-mails-in-ssjs-using-mime
I am writing a Java program using the SVNKit API, and I need to use the correct class or call in the API that would allow me to find the diff between files stored in separate locations.
1st file:
https://abc.edc.xyz.corp/svn/di-edc/tags/ab-cde-fgh-axsym-1.0.0/src/site/apt/releaseNotes.apt
2nd file:
https://abc.edc.xyz.corp/svn/di-edc/tags/ab-cde-fgh-axsym-1.1.0/src/site/apt/releaseNotes.apt
I have used the API calls listed below to generate the diff output, but I have been unsuccessful so far.
DefaultSVNDiffGenerator diffGenerator = new DefaultSVNDiffGenerator();
diffGenerator.displayFileDiff("", file1, file2, "10983", "8971", "text", "text/plain", output);
diffClient.doDiff(svnUrl1, SVNRevision.create(10868), svnUrl2, SVNRevision.create(8971), SVNDepth.IMMEDIATES, false, System.out);
Can anyone provide guidance on the correct way to do this?
Your code looks correct, but prefer using the new API:
final SvnOperationFactory svnOperationFactory = new SvnOperationFactory();
try {
    final ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
    final SvnDiffGenerator diffGenerator = new SvnDiffGenerator();
    diffGenerator.setBasePath(new File(""));

    final SvnDiff diff = svnOperationFactory.createDiff();
    diff.setSources(SvnTarget.fromURL(url1, svnRevision1), SvnTarget.fromURL(url2, svnRevision1));
    diff.setDiffGenerator(diffGenerator);
    diff.setOutput(byteArrayOutputStream);
    diff.run();
} finally {
    svnOperationFactory.dispose();
}
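To get at the collected diff afterwards, the output stream can be turned into a String right after diff.run(), still inside the try block (a small sketch; StandardCharsets requires Java 7+):
// after diff.run(), still inside the try block:
final String diffText = new String(byteArrayOutputStream.toByteArray(), java.nio.charset.StandardCharsets.UTF_8);
System.out.println(diffText);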
I have a 'small' problem. In a database, documents contain a rich text field, and that field holds the profile picture of a contact. The problem is that this content is not saved as MIME, and therefore I cannot compute the URL of the image.
I'm using a POJO to retrieve data from the person profile and use it in my XPages control to display its contents. I need to build a conversion agent that takes the content of the RichTextItem and converts it to MIME so that I can compute a URL, something like:
http://host/database.nsf/($users)/D40FE4181F2B86CCC12579AB0047BD22/Photo/M2?OpenElement
Could someone help me with converting the contents of the RichTextItem to MIME? When I check for embedded objects in the rich text field, there are none. I tried getting the content of the field as a stream and saving it to a new item using the following code, but the new item is somehow not created.
System.out.println("check if document contains a field with name "+fieldName);
if(!doc.hasItem(fieldName)){
throw new PictureConvertException("Could not locate richtextitem with name"+fieldName);
}
RichTextItem pictureField = (RichTextItem) doc.getFirstItem(fieldName);
System.out.println("Its a richtextfield..");
System.out.println("Copy field to backup field");
if(doc.hasItem("old_"+fieldName)){
doc.removeItem("old_"+fieldName);
}
pictureField.copyItemToDocument(doc, "old_"+fieldName);
// Vector embeddedPictures = pictureField.getEmbeddedObjects();
// System.out.println(doc.hasEmbedded());
// System.out.println("Retrieved embedded objects");
// if(embeddedPictures.isEmpty()){
// throw new PictureConvertException("No embedded objects could be found.");
// }
//
// EmbeddedObject photo = (EmbeddedObject) embeddedPictures.get(0);
System.out.println("Create inputstream");
//s.setConvertMime(false);
InputStream iStream = pictureField.getInputStream();
System.out.println("Create notesstream");
Stream nStream = s.createStream();
nStream.setContents(iStream);
System.out.println("Create mime entity");
MIMEEntity mEntity = doc.createMIMEEntity("PictureTest");
MIMEHeader cdheader = mEntity.createHeader("Content-Disposition");
System.out.println("Set header withfilename picture.gif");
cdheader.setHeaderVal("attachment;filename=picture.gif");
System.out.println("Setcontent type header");
MIMEHeader cidheader = mEntity.createHeader("Content-ID");
cidheader.setHeaderVal("picture.gif");
System.out.println("Set content from stream");
mEntity.setContentFromBytes(nStream, "application/gif", mEntity.ENC_IDENTITY_BINARY);
System.out.println("Save document..");
doc.save();
//s.setConvertMime(true);
System.out.println("Done");
// Clean up if we are done..
//doc.removeItem(fieldName);
It's been a little while now, and I didn't go down the route of converting the existing data to MIME. I could not get it to work, and after some more research it seemed unnecessary. Because the issue is about displaying images bound to a rich text field, I did some research on how to compute the URL for an image, and I came up with the following lines of code:
function getImageURL(doc:NotesDocument, strRTItem, strFileType) {
    if (doc != null && !"".equals(strRTItem)) {
        var rtItem = doc.getFirstItem(strRTItem);
        if (rtItem != null) {
            var personelDB = doc.getParentDatabase();
            var dbURL = getDBUrl(personelDB);
            var imageURL:java.lang.StringBuffer = new java.lang.StringBuffer(dbURL);
            if ("file".equals(strFileType)) {
                var embeddedObjects:java.util.Vector = rtItem.getEmbeddedObjects();
                if (!embeddedObjects.isEmpty()) {
                    var file:NotesEmbeddedObject = embeddedObjects.get(0);
                    imageURL.append("(lookupView)\\");
                    imageURL.append(doc.getUniversalID());
                    imageURL.append("\\$File\\");
                    imageURL.append(file.getName());
                    imageURL.append("?Open");
                }
            } else {
                imageURL.append(doc.getUniversalID());
                imageURL.append("/" + strRTItem + "/");
                if (rtItem instanceof lotus.domino.local.RichTextItem) {
                    imageURL.append("0.C4?OpenElement");
                } else {
                    imageURL.append("M2?OpenElement");
                }
            }
            return imageURL.toString();
        }
    }
}
It will check whether a given rich text field is present. If it is, it assumes a few things:
If there are files in the rich text field, the first file is the picture to display;
otherwise it builds the URL directly: if the item is a plain RichTextItem it generates one form of URL, and if not it assumes the item is a MIME entity and generates the other form.
Not sure if this is an answer but I can't seem to add comments yet. Have you verified that there is something in your stream?
if (stream.getBytes() != 0) {
    // the stream actually contains data
}
The issue cannot be resolved "ideally" in Java.
1) If you convert to MIME, you destroy the original Notes rich text; MIME allows only a rough approximation of the original content. This might or might not matter.
If it matters, it's possible to convert a copy of the original field to MIME used only for display purposes, or to extract the image via DXL and store it separately; however, either approach introduces a synchronization issue every time somebody changes the image in the original rich text item.
2) Computing the URL as in the OP's code in the accepted self-answer is not possible in general, because the constant 0.C4 in that example is tied to the offset of the image within the binary data of the rich text item. Any other rich text field design, manually inserted images, or content created by a different version of Notes will change that offset.
3) The URL can be computed correctly only by using the C API, which allows you to inspect the binary data of a rich text item. In my opinion, this cannot be done from Java without building JNI bridges or the like.
I am trying to use the boilerpipe Java library to extract news articles from a set of websites.
It works great for text in English, but for text with special characters, for example words with accent marks (história), those characters are not extracted correctly. I think it is an encoding problem.
The boilerpipe FAQ says "If you extract non-English text you might need to change some parameters" and then refers to a paper, but I found no solution in that paper.
My question is: are there any parameters in boilerpipe where I can specify the encoding? Is there any way to work around this and get the text correctly?
How I'm using the library:
(first attempt, based on the URL):
URL url = new URL(link);
String article = ArticleExtractor.INSTANCE.getText(url);
(second attempt, based on the HTML source code):
String article = ArticleExtractor.INSTANCE.getText(html_page_as_string);
You don't have to modify boilerpipe's internal classes.
Just pass an InputSource object to the ArticleExtractor.INSTANCE.getText() method and force the encoding on that object. For example:
URL url = new URL("http://some-page-with-utf8-encodeing.tld");
InputSource is = new InputSource();
is.setEncoding("UTF-8");
is.setByteStream(url.openStream());
String text = ArticleExtractor.INSTANCE.getText(is);
Regards!
Well, from what I see, when you use it like that, the library will automatically choose which encoding to use. From the HTMLFetcher source:
public static HTMLDocument fetch(final URL url) throws IOException {
    final URLConnection conn = url.openConnection();
    final String ct = conn.getContentType();
    Charset cs = Charset.forName("Cp1252");
    if (ct != null) {
        Matcher m = PAT_CHARSET.matcher(ct);
        if (m.find()) {
            final String charset = m.group(1);
            try {
                cs = Charset.forName(charset);
            } catch (UnsupportedCharsetException e) {
                // keep default
            }
        }
    }
Try debugging their code a bit, starting with ArticleExtractor.getText(URL), and see if you can override the encoding
OK, I got a solution.
As Andrei said, I had to change the class HTMLFetcher, which is in the package de.l3s.boilerpipe.sax.
What I did was convert all of the fetched text to UTF-8.
At the end of the fetch function, I had to add two lines and change the last one:
final byte[] data = bos.toByteArray(); // stays the same
byte[] utf8 = new String(data, cs.displayName()).getBytes("UTF-8"); // new line (conversion)
cs = Charset.forName("UTF-8"); // new line: set the charset to UTF-8
return new HTMLDocument(utf8, cs); // edited line
Boilerpipe's ArticleExtractor uses some algorithms that have been specifically tailored to English, such as measuring the average number of words per phrase. In any language that is more or less verbose than English (i.e., every other language), these algorithms will be less accurate.
Additionally, the library uses some English phrases to try and find the end of the article (comments, post a comment, have your say, etc) which will clearly not work in other languages.
This is not to say that the library will outright fail - just be aware that some modification is likely needed for good results in non-English languages.
Java:
import java.net.URL;
import org.xml.sax.InputSource;
import de.l3s.boilerpipe.extractors.ArticleExtractor;
public class Boilerpipe {
    public static void main(String[] args) {
        try {
            URL url = new URL("http://www.azeri.ru/az/traditions/kuraj_pehlevanov/");
            InputSource is = new InputSource();
            is.setEncoding("UTF-8");
            is.setByteStream(url.openStream());
            String text = ArticleExtractor.INSTANCE.getText(is);
            System.out.println(text);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Eclipse:
Run > Run Configurations > Common Tab. Set Encoding to Other(UTF-8), then click Run.
I had the same problem; the cnr solution works great. Just change the UTF-8 encoding to ISO-8859-1. Thanks!
URL url = new URL("http://some-page-with-utf8-encodeing.tld");
InputSource is = new InputSource();
is.setEncoding("ISO-8859-1");
is.setByteStream(url.openStream());
String text = ArticleExtractor.INSTANCE.getText(is);
I'm starting to design an application that will, in part, run through a directory of files and compare their extensions to their file headers.
Does anyone have any advice on the best way to approach this? I know I could simply have a lookup table containing each file type's header signature, e.g., JPEG: \xFF\xD8\xFF\xE0.
I was hoping there might be a simpler way.
Thanks in advance for your help.
I'm afraid it'll have to be more complicated than that. Not every file type has a header at all, and some (such as RAR) have their characteristic data structures at the end rather than at the beginning.
You may want to take a look at the Unix file command, which does the same job:
http://linux.die.net/man/1/file
http://linux.die.net/man/5/magic
If you don't need to do any low-level work on these values (and you aren't on Linux), you could simply use an external program, like TrID, that can do this for you.
Maybe you can just work on its output without doing it all yourself. In any case, if you only have around 20 kinds of files to manage, a simple lookup table (e.g., a HashMap<String, byte[]>, sketched below) is not that bad. Of course this will only work if the desired file format has a magic number; otherwise you are on your own (or back to an external program).
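A minimal sketch of such a lookup table (the entries shown are the well-known JPEG and PNG signatures; the class and method names are just placeholders):
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class MagicNumberLookup {
    private static final Map<String, byte[]> MAGIC = new HashMap<String, byte[]>();
    static {
        MAGIC.put("jpg", new byte[] {(byte) 0xFF, (byte) 0xD8, (byte) 0xFF});
        MAGIC.put("png", new byte[] {(byte) 0x89, 0x50, 0x4E, 0x47});
    }

    // Returns true if the file's first bytes match the signature registered for the given extension.
    public static boolean matchesExtension(String path, String extension) throws IOException {
        byte[] expected = MAGIC.get(extension.toLowerCase());
        if (expected == null) {
            return false; // no signature registered for this extension
        }
        FileInputStream in = new FileInputStream(path);
        try {
            byte[] actual = new byte[expected.length];
            return in.read(actual) == expected.length && Arrays.equals(actual, expected);
        } finally {
            in.close();
        }
    }
}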
Because some file types have no meaningful header (thanks @Michael), I would create a map from extension to a kind of type checker, with a simple API like:
public interface TypeCheck {
    public boolean isValid(InputStream data) throws IOException;
}
Now you can code something like
File toBeTested = ...;
Map<String, TypeCheck> typeCheckByExtension = ...;
TypeCheck check = typeCheckByExtension.get(getExtension(toBeTested.getName()));
if (check != null) {
    InputStream in = new FileInputStream(toBeTested);
    if (check.isValid(in)) {
        // process valid file
    } else {
        // process invalid file
    }
    in.close();
} else {
    // process unknown file
}
The header check for JPEG, for example, may look like this (note the casts, since values like 0xFF do not fit into a byte literal):
public class JpegTypeCheck implements TypeCheck {
    private static final byte[] HEADER = new byte[] {(byte) 0xFF, (byte) 0xD8, (byte) 0xFF, (byte) 0xE0};

    public boolean isValid(InputStream data) throws IOException {
        byte[] header = new byte[4];
        return data.read(header) == 4 && Arrays.equals(header, HEADER);
    }
}
For other types with no significant header you can implement completely different type checks.
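For instance, a hypothetical check for plain text (not part of the original answer) could simply treat the file as text when its first bytes contain no NUL byte — a rough heuristic only:
// Uses the TypeCheck interface defined above; needs java.io.InputStream and java.io.IOException imports.
public class PlainTextTypeCheck implements TypeCheck {
    // Rough heuristic: accept the file as text if the first 512 bytes contain no NUL byte.
    public boolean isValid(InputStream data) throws IOException {
        byte[] buffer = new byte[512];
        int read = data.read(buffer);
        for (int i = 0; i < read; i++) {
            if (buffer[i] == 0) {
                return false;
            }
        }
        return true;
    }
}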
You can extract the MIME type for each file and compare it to a map from MIME type to extensions (a Map<String, List<String>>, where the key is the MIME type and the value is the list of valid extensions).
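A small sketch of that idea (Files.probeContentType requires Java 7+; the map contents and the helper name are just examples):
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MimeTypeCheck {
    // Returns true if the detected MIME type allows the file's extension.
    public static boolean extensionMatchesMimeType(String fileName) throws IOException {
        Map<String, List<String>> extensionsByMimeType = new HashMap<String, List<String>>();
        extensionsByMimeType.put("image/jpeg", Arrays.asList("jpg", "jpeg"));
        extensionsByMimeType.put("image/png", Arrays.asList("png"));

        Path path = Paths.get(fileName);
        String mimeType = Files.probeContentType(path); // may return null for unknown types
        String name = path.getFileName().toString();
        String extension = name.substring(name.lastIndexOf('.') + 1).toLowerCase();

        List<String> validExtensions = extensionsByMimeType.get(mimeType);
        return validExtensions != null && validExtensions.contains(extension);
    }
}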
Resources:
Get the Mime Type from a File
JMimeMagic
On the same topic:
Java - HowTo extract MimeType from a byte[]
Getting A File's Mime Type In Java
You can determine the file type by reading the header with Apache Tika. The following code requires the Apache Tika jar:
import java.io.BufferedInputStream;
import java.io.InputStream;
import org.apache.tika.detect.Detector;
import org.apache.tika.metadata.Metadata;
import org.apache.tika.mime.MediaType;
import org.apache.tika.parser.AutoDetectParser;

InputStream is = MainApp.class.getResourceAsStream("/NetFx20SP1_x64.txt");
BufferedInputStream bis = new BufferedInputStream(is);
AutoDetectParser parser = new AutoDetectParser();
Detector detector = parser.getDetector();
Metadata md = new Metadata();
md.add(Metadata.RESOURCE_NAME_KEY, MainApp.class.getResource("/NetFx20SP1_x64.txt").getPath());
MediaType mediaType = detector.detect(bis, md);
System.out.println("MIME type of file: " + mediaType.toString());