I am using JasperReports for a project that needs both PDF and CSV output, and the majority of the data is in the Detail section, within a table. I know you can remove the pageHeader and columnHeader at the document level, but is it possible to remove, or print only once, the column headers within a table? If not, the CSV output looks like:
User Type,Time,Username,Event,IP Address,Student Name,Student Number
Admin,6/6/11 8:09 PM,admin,Uploaded a report file.,0:0:0:0:0:0:0:1,,
....[about 20 more lines of CSV then]....
User Type,Time,Username,Event,IP Address,Student Name,Student Number
This just looks very unprofessional and isn't very functional. As I said, I know the page-level headers can be removed with:
jasperPrint.getPropertiesMap().setProperty("net.sf.jasperreports.export.exclude.origin.band.1", "pageHeader");
jasperPrint.getPropertiesMap().setProperty("net.sf.jasperreports.export.exclude.origin.band.2", "pageFooter");
jasperPrint.getPropertiesMap().setProperty("net.sf.jasperreports.export.csv.exclude.origin.band.1", "columnHeader");
jasperPrint.getPropertiesMap().setProperty("net.sf.jasperreports.export.csv.exclude.origin.band.2", "pageFooter");
jasperPrint.getPropertiesMap().setProperty("net.sf.jasperreports.export.csv.exclude.origin.keep.first.band.1", "columnHeader");
but I am looking for a way to remove them from the table for CSV output only, not PDF. Is this possible?
Any help would be greatly appreciated!
Thanks,
Chuck
Some useful properties to control report export for different formats.
net.sf.jasperreports.export.xls.exclude.origin.band.1=title
net.sf.jasperreports.export.xls.exclude.origin.band.2=summary
net.sf.jasperreports.export.xls.exclude.origin.band.3=pageHeader
net.sf.jasperreports.export.xls.exclude.origin.band.4=pageFooter
net.sf.jasperreports.export.xls.exclude.origin.keep.first.band.1=columnHeader
net.sf.jasperreports.export.xls.collapse.row.span=false
net.sf.jasperreports.export.xls.remove.empty.space.between.columns=true
net.sf.jasperreports.export.csv.exclude.origin.band.csvSummary=summary
net.sf.jasperreports.export.csv.exclude.origin.band.1=title
net.sf.jasperreports.export.csv.exclude.origin.band.2=pageFooter
net.sf.jasperreports.export.csv.exclude.origin.keep.first.band.1=columnHeader
net.sf.jasperreports.export.html.using.images.to.align=false
net.sf.jasperreports.export.html.remove.empty.space.between.rows=true
net.sf.jasperreports.export.ignore.page.margins=true
Full reference.
Column headers in the table component are meant to be repeated when the table overflows and cannot be hidden. To achieve what you want, you could either:
move the contents of your columnHeader into the tableHeader so that only the table header prints once
or filter out the elements when performing a specific export by adding sets of properties like these:
<property name="net.sf.jasperreports.export.pdf.exclude.origin.keep.first.band.1" value="columnHeader"/>
<property name="net.sf.jasperreports.export.pdf.exclude.origin.keep.first.report.1" value="*"/>
More info on filtering elements at export time here and here.
Maybe you should use different report definitions for each output. If not, you could simply detect when you're exporting to CSV and only set those properties then.
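For example, here is a minimal sketch of that approach, assuming the JasperReports 5.6+ exporter API; isCsvExport is a hypothetical flag for the requested output format, and the property is the CSV-specific one from the question:

import java.io.File;
import net.sf.jasperreports.engine.export.JRCsvExporter;
import net.sf.jasperreports.export.SimpleExporterInput;
import net.sf.jasperreports.export.SimpleWriterExporterOutput;

// ...
if (isCsvExport) { // hypothetical flag: only strip the repeated headers for CSV
    jasperPrint.getPropertiesMap().setProperty(
            "net.sf.jasperreports.export.csv.exclude.origin.keep.first.band.1",
            "columnHeader");
}
JRCsvExporter exporter = new JRCsvExporter();
exporter.setExporterInput(new SimpleExporterInput(jasperPrint));
exporter.setExporterOutput(new SimpleWriterExporterOutput(new File("report.csv")));
exporter.exportReport();

The PDF export path is left untouched, so the repeated headers still appear there.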
I'm using com.extjs.gxt.ui.client.widget.form.ComboBox with GWT 2.8.2 and GXT 2.3.1a.
I have two ComboBoxes that share the same store with many values. When I search in one ComboBox, I can see in the debugger that its lastQuery value matches the search string and that only the single matching value remains in the ComboBox store. But the same filtered store backs the other ComboBox, so when I try to work with further ComboBoxes connected (by a listener) to the second one, I get an error because the required values can no longer be loaded.
com.google.gwt.core.client.JavaScriptException: (TypeError) : Cannot read properties of null (reading 'getValue__Ljava_lang_Object_2')
at Unknown.TypeError: Cannot read properties of null(Unknown#-1)
Is there an option that allows several ComboBoxes to use the same store without affecting each other, i.e. without the store being limited to the values matching the search string? Using two stores by creating two copies is not preferable. Thanks in advance.
UPD: combobox2.getValue() returns null when a search was performed earlier. How can I reload the store in this case?
UPD: I create the two ComboBoxes with the same store:
private final ListStore<ComboBoxItemModel<String>> countryStore;

public ComboBox<ComboBoxItemModel<String>> createCountryCombobox() {
    ComboBox<ComboBoxItemModel<String>> countriesComboBox = new ComboBox<>();
    countriesComboBox.setFieldLabel("Country");
    countriesComboBox.setStore(countryStore);
    countriesComboBox.setAllowBlank(false);
    countriesComboBox.setTriggerAction(ComboBox.TriggerAction.ALL);
    countriesComboBox.setDisplayField(ComboBoxItemModel.COMBO_BOX_ITEM_NAME);
    countriesComboBox.setValue(countriesComboBox.getStore().getAt(0));
    countriesComboBox.setForceSelection(true);
    countriesComboBox.setEditable(true);
    return countriesComboBox;
}
The setEditable(true) option allows searching countries in the list, but as a result it removes values from the store. So I can't load values for other fields connected to the second ComboBox without a manual selection in combobox2.
It seems using two stores is the simplest solution; the other solutions I tried look like workarounds. I just create a copy and it works fine:
// Give each ComboBox its own store so filtering one does not affect the other.
ListStore<ComboBoxItemModel<String>> store = new ListStore<>();
store.add(countryStore.getModels()); // copy the shared models into the new store
countriesComboBox.setStore(store);
I tried to delete a specific document from the Solr admin console with the following approach; the query executed successfully, but the document was not deleted.
Went to the admin console.
Went to the Documents page.
Selected the /update handler.
Selected document type XML.
Set Commit to true.
Set Commit Within to 1000.
Query: <delete><query>source_type:xyz</query></delete>
Note:
The attached screenshot does not show source_type:xyz, but we did try the <delete><query>source_type:xyz</query></delete> query.
source_type is one of the fields in our schema.
We would really appreciate it if someone could help us with this.
I found the way: quoting the value in double quotes worked for me.
<delete><query>source_type:"xyz"</query></delete>
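For reference, the same delete can also be issued from Java with SolrJ. This is a minimal sketch; the core URL is a placeholder:

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class DeleteBySourceType {
    public static void main(String[] args) throws Exception {
        // Placeholder core URL; adjust to your Solr instance.
        try (SolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build()) {
            solr.deleteByQuery("source_type:\"xyz\""); // quoted value, as in the fix above
            solr.commit();
        }
    }
}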
Performing a test with BIRT I was able to create a report and render it in PDF, but unfortunately I'm not getting the expected result.
For my DataSource I created a Scripted DataSource; no code was needed in there (as far as I could tell from the documentation for what I'm trying to do).
For my DataSet I created a Scripted DataSet using my Scripted DataSource as the source. There I defined the open script like:
importPackage(Packages.org.springframework.context);
importPackage(Packages.org.springframework.web.context.support);

// Look up the Spring web application context and load the POJO once.
var sc = reportContext.getHttpServletRequest().getSession().getServletContext();
var spring = WebApplicationContextUtils.getWebApplicationContext(sc);
myPojo = spring.getBean("myDao").findById(params["pojoId"]);
And the fetch script like:
if (myPojo != null) {
    row["title"] = myPojo.getTitle();
    myPojo = null; // emit a single row, then stop
    return true;
}
return false;
As the population of row is done at runtime, I wasn't able to get the DataSet columns automatically, so I created one with the following configuration: name: columnTitle (as this is the name used to populate the row object in the fetch code).
Afterwards I edited the layout of my report and added the column to my layout.
I was able to confirm that spring.getBean("myDao").findById(params["pojoId"]); is executed, but my rendered report does not show the title. If I double-click on my column label in the report layout, I can see that the expression is dataSetRow["columnTitle"]. Is that right, even though I'm using row in my fetch script? What am I missing here?
Well, what is contractVersion?
It is obviously not initialized in the open event.
Should this read myPojo.contractVersion or perhaps myPojo.getContractVersion()?
Another point: is the DataSet with the column "columnTitle" actually bound to the layout?
You should also run your report as HTML or in the previewer to check for script errors.
Unfortunately, these are silently ignored when generating the report as PDF...
The problem was the use of Batik twice (two different versions): one dependency for BIRT and another for DOCX4J.
The issue is quite difficult to identify because there is no log output when rendering PDF files.
Rendering HTML, I could see an error message which I could investigate to find the problem.
In my case I could remove DOCX4J from the Maven POM.
I'm trying to create a field called "sku", which is indexed with the following analyzer:
<fieldType name="sku" class="solr.TextField">
<analyzer>
<tokenizer class="solr.PatternTokenizerFactory" pattern="(SKU|Part(\sNumber)?):?\s([0-9-]+)" group="3"/>
</analyzer>
</fieldType>
This is from reading the documentation here http://lucidworks.lucidimagination.com/display/solr/Tokenizers#Tokenizers-RegularExpressionPatternTokenizer
I already have a Java program that is posting to the Solr server successfully; however, it is not grabbing the SKU out of the files and indexing it. Here is my Java code:
ContentStreamUpdateRequest up = new ContentStreamUpdateRequest("/update/extract");
up.addFile(arg0, arg0.getName());
up.setParam("literal.id", arg0.getName()); // use the file name as the document id
up.setParam("uprefix", "attr_"); // prefix unknown extracted fields with attr_
up.setParam("fmap.content", "attr_content"); // map the extracted body to attr_content
up.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);
server.request(up);
Any help appreciated.
I understand I can parse the text files myself, extract the SKU, and post it in the parameters to the server, but I thought Solr could do this for me?
It is hard to tell what is going on, because there are several steps in the middle.
For example, what's your schema.xml field definition? Is it definitely using sku as its type (and not, say, string)? Then, what's the field name (attr_sku?), and does the extract handler mapping actually map to it properly? The extract handler usually sends metadata as individual fields and then all file content as one big field. Is the SKU somewhere in the metadata?
I would do a copyField into something non-processing and see whether the content actually makes it into the Solr field. Then I would start troubleshooting the regex itself.
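One quick way to troubleshoot the regex outside Solr is to exercise the same pattern in plain Java. A sketch; the sample input is made up:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SkuPatternTest {
    public static void main(String[] args) {
        // Same pattern as the tokenizer; group 3 captures the digits.
        Pattern p = Pattern.compile("(SKU|Part(\\sNumber)?):?\\s([0-9-]+)");
        Matcher m = p.matcher("Part Number: 123-456"); // made-up sample text
        while (m.find()) {
            System.out.println(m.group(3)); // prints 123-456
        }
    }
}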
I have the following error.
I'm trying to create a PDF using iText, with a specific format. I opted to use tables for each section of the page, because the format I need to produce contains tables. I already did everything: I created the tables and added them with the doc.add(table) method, and this worked fine, but I needed to place the tables at a specific position, so I used the table.writeSelectedRows() method, and this also worked fine.
And here comes the error; this is my code:
table_SectionTwo.addCell(cell_White);
table_SectionTwo.addCell(cell_White);
table_SectionTwo.addCell(p);
table_SectionTwo.addCell(cell_OrderDate);
table_SectionTwo.addCell(cell_CustomerOrderDate);
table_SectionTwo.addCell(cell_OrderNumberSection);
float[] columnWidths = new float[] {38f, 105f, 90f};
table_SectionTwo.setTotalWidth(columnWidths);
table_SectionTwo.setLockedWidth(true);
table_SectionTwo.completeRow();
table_SectionTwo.writeSelectedRows(0, -1, 260f, 770f, super.getPdfWriter().getDirectContent());
doc.add(table_SectionTwo);
As you can see, if I execute this code, it will add the same table twice.
The problem comes when I remove doc.add(table); I do this in order to add only one table, at a specific position, using table.writeSelectedRows(). This is how my code remains:
table_SectionTwo.writeSelectedRows(0, -1, 260f, 770f, super.getPdfWriter().getDirectContent());
//super.addTable(table_SectionTwo);
I commented out doc.add(table).
This should write only one table, but it doesn't work. It throws:
ExceptionConverter: java.io.IOException: The document has no pages.
at com.itextpdf.text.pdf.PdfPages.writePageTree(PdfPages.java:113)
at com.itextpdf.text.pdf.PdfWriter.close(PdfWriter.java:1217)
at com.itextpdf.text.pdf.PdfDocument.close(PdfDocument.java:777)
at com.itextpdf.text.Document.close(Document.java:398)
at PDFConstructor.CloseDocument(PDFConstructor.java:85)
at InvoicePDF.CloseDocument(InvoicePDF.java:58)
at Demo.main(Demo.java:72)
The curious thing is that when I comment out doc.add(table) it doesn't work, and when I comment out table.writeSelectedRows(), doc.add(table) works fine.
This error occurs only when doc.add(table) is commented out and table.writeSelectedRows() is uncommented.
Please help me.
Although you don't give sufficient information in your question, I think the problem is caused by the fact that you don't define the width of the table.
Do this test: ask the table for its total height. If iText returns 0, then you forgot to define the width of the table; if it doesn't return 0, then iText knows the width either because you defined it explicitly, or because you used document.add(table), which calculated the dimensions of the table based on the page metrics of the document object.
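That test is a one-liner; a sketch using the table from the question:

// If getTotalHeight() returns 0, the table width was never defined,
// so writeSelectedRows() has nothing it can lay out.
float height = table_SectionTwo.getTotalHeight();
System.out.println("total height: " + height);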
If something else is at play, you'll have to provide more info.
What I understand from your question is that you want to write specific rows to the document, but at a specified position. If that is correct, is super.getPdfWriter().getDirectContent() necessary? I don't think so. To analyse this part I need your whole code snippet, or a demo version of the code that reproduces the problem.
2nd: Internally iText also uses PdfContentByte to write a PdfPTable via PdfPRow. Also remember that, according to the author (Bruno), iText is built on the Builder pattern. If the previous lines mean nothing to you, skip them.
You are currently adding content to the table before all of its required properties are set:
table_SectionTwo.addCell(cell_White);
table_SectionTwo.addCell(cell_White);
table_SectionTwo.addCell(p);
table_SectionTwo.addCell(cell_OrderDate);
table_SectionTwo.addCell(cell_CustomerOrderDate);
table_SectionTwo.addCell(cell_OrderNumberSection);
It should be something like this:
float[] columnWidths = new float[] {38f, 105f, 90f};
PdfPTable table_SectionTwo = new PdfPTable(columnWidths);
table_SectionTwo.setTotalWidth(500.0f);
table_SectionTwo.setWidthPercentage(100.0f); // not used once the width is locked
table_SectionTwo.setLockedWidth(true);
3rd: Don't use super.getPdfWriter().getDirectContent(). As the code above shows, you are using a Document, so I think you must also have written something like the following code snippet:
PdfWriter pdfWrtr = null;
ByteArrayOutputStream baos = new ByteArrayOutputStream();
Document doc = new Document(UtilConstant.pageSizePdf, 0, 0, 0, 0);
pdfWrtr = PdfWriter.getInstance(doc, baos);
In the try/catch, use pdfWrtr.getDirectContent() instead.
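Put together, a minimal sketch of that flow in iText 5; table_SectionTwo is the table from the question, and the page size and output stream are placeholders:

import java.io.ByteArrayOutputStream;
import com.itextpdf.text.Document;
import com.itextpdf.text.PageSize;
import com.itextpdf.text.pdf.PdfContentByte;
import com.itextpdf.text.pdf.PdfWriter;

// ...
Document doc = new Document(PageSize.A4, 0, 0, 0, 0);
ByteArrayOutputStream baos = new ByteArrayOutputStream();
PdfWriter pdfWrtr = PdfWriter.getInstance(doc, baos);
doc.open(); // the document must be open before writing to the direct content
PdfContentByte canvas = pdfWrtr.getDirectContent();
table_SectionTwo.writeSelectedRows(0, -1, 260f, 770f, canvas);
doc.close();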
All of these points are based on my analysis of your code.
Also, another point from the exception:
ExceptionConverter: java.io.IOException: The document has no pages.
at com.itextpdf.text.pdf.PdfPages.writePageTree(PdfPages.java:113)
...............................
at InvoicePDF.CloseDocument(InvoicePDF.java:58)
at Demo.main(Demo.java:72)
It is a typical error when nothing has been added to the document. Maybe an exception is thrown (and ignored) in step 4 (in the terms of iText in Action), and maybe you are executing step 5 (document.close()) anyway, in spite of the exception in step 4. So please attach Demo.java if the above is not clear enough to help you.