Does the .prpt file (report template file) contain datasource information? - java

When I create a report using Pentaho Report Designer, it outputs a report file with a .prpt extension. I then found an example on the internet where the following code was used to display the report in HTML format:
protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    ResourceManager manager = new ResourceManager();
    manager.registerDefaults();
    String reportPath = "file:" + this.getServletContext().getRealPath("sampleReport.prpt");
    try {
        Resource res = manager.createDirectly(new URL(reportPath), MasterReport.class);
        MasterReport report = (MasterReport) res.getResource();
        HtmlReportUtil.createStreamHTML(report, response.getOutputStream());
    } catch (Exception e) {
        e.printStackTrace();
    }
}
The report was rendered successfully. Since we haven't specified any datasource information here, I assume the .prpt file contains that information.
If that's true, isn't Jasper a better reporting tool than Pentaho? When we display Jasper reports we also have to provide the datasource details ourselves, so the report stays flexible and is not bound to any particular database.

Nope. The data source can be stored in the .prpt, but it can also be passed to the report. The usual way is to simply use JNDI, so that you can deploy the same report to multiple test/dev/production environments.
You'll probably get better, quicker answers from the forum: forums.pentaho.org

The PRPT file usually contains all the information that is needed to run the report. You can provide your own datasources by modifying the MasterReport object that you get back from the ResourceManager.
However, I have yet to see a valid use case where that kind of operation actually makes sense. To provide connection information for SQL datasources at runtime, you usually use the JNDI subsystem of your web application or J2EE server.
99.99% of all reports that run on the Pentaho BI-Server do NOT need their datasources replaced manually in order to run, and the remaining 0.01% are legacy reports from ancient reporting engine versions.
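For what it's worth, if you really do need to swap the datasource in code, a minimal sketch along these lines should work. The driver, URL, credentials and the query name "ReportQuery" are placeholders and must match whatever the .prpt was designed against; the datasource classes live in the Pentaho Reporting engine's SQL datafactory package.
// Sketch only: replace the report's datasource after loading the .prpt.
// Classes: org.pentaho.reporting.engine.classic.core.modules.misc.datafactory.sql.*
MasterReport report = (MasterReport) res.getResource();

DriverConnectionProvider connectionProvider = new DriverConnectionProvider();
connectionProvider.setDriver("com.mysql.jdbc.Driver");            // placeholder driver
connectionProvider.setUrl("jdbc:mysql://localhost:3306/reports"); // placeholder URL
connectionProvider.setProperty("user", "reportUser");
connectionProvider.setProperty("password", "secret");

SQLReportDataFactory dataFactory = new SQLReportDataFactory(connectionProvider);
dataFactory.setQuery("ReportQuery", "SELECT * FROM sales_summary"); // placeholder query
report.setDataFactory(dataFactory);

HtmlReportUtil.createStreamHTML(report, response.getOutputStream());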

Related

Connecting to an embedded OrientDB server in Java

I'm looking to run a Java process on several machines, each of which will need to start a local OrientDB server, load a graph, perform our processing, then close. As such, I need to be able to embed the OServer start-up process within Java.
There is plenty of advice about how to do so, including SO questions, however most of it seems to be out of date (so please don't mark this as a duplicate prematurely). The most directly relevant seems to be this, however it doesn't work, at least for me. With the code below, I get the subsequent error:
try {
    final OServer server = OServerMain.create();
    server.startup(server.getClass().getResourceAsStream("/orientdb-server-config.xml"));
    server.activate();
} catch (Exception e) {
    e.printStackTrace();
    System.exit(-1);
}
2021-12-07 21:47:39:323 INFO Loading configuration from input stream [OServerConfigurationLoaderXml]
2021-12-07 21:47:39:633 INFO OrientDB Server v3.2.3 (build dc98198215aa57baf29b32adb657dc3733acdb55, branch develop) is starting up... [OServer]java.lang.NullPointerException
at com.orientechnologies.orient.core.Orient.onEmbeddedFactoryInit(Orient.java:957)
at com.orientechnologies.orient.core.db.OrientDBEmbedded.<init>(OrientDBEmbedded.java:97)
at com.orientechnologies.orient.core.db.OrientDBInternal.embedded(OrientDBInternal.java:119)
at com.orientechnologies.orient.server.OServer.startupFromConfiguration(OServer.java:388)
at com.orientechnologies.orient.server.OServer.startup(OServer.java:314)
at ems.definitions.instance.Graph.<init>(Graph.java:47)
I am using OrientDB version 3.2.3, with the 'ALL' .jar downloaded from here. Note that this jar does not contain the configuration file orientdb-server-config.xml, so I downloaded it directly from the GitHub source.
Is there an issue with my specific implementation, with my approach in general, or with the default config file I'm using? I look forward to hearing your thoughts.
The issue was three-fold:
I was using the 'ALL' .jar provided by the website. Instead I needed to use the libraries provided in the full source.
I did not account for the fact that when the code failed, it did not delete the database it had half-created, so subsequent runs could not get past that point. I had to implement a temporary fail-safe that drops the database prior to initialisation to avoid this.
I was using the wrong(?) strategy in general.
My working method is as below.
orientDB = new OrientDB("embedded:/tmp/", "admin", "adminpwd", OrientDBConfig.defaultConfig());

/** THIS IS VERY MUCH ONLY FOR LOCAL TESTING **/
if (orientDB.exists(name))
    orientDB.drop(name);

// If the database does not already exist, create it.
if (!orientDB.exists(name))
    orientDB.execute("create database " + name + " PLOCAL users ( admin identified by 'adminpwd' role admin)");

db = orientDB.open(name, "admin", "adminpwd");
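For completeness, since the question mentions closing things down once processing is done, here is a hedged variant of the same flow with explicit cleanup (names and credentials as in the snippet above; the processing itself is left as a placeholder):
// Sketch only: same flow as above, but releasing the session and the embedded context.
OrientDB orientDBContext = new OrientDB("embedded:/tmp/", "admin", "adminpwd", OrientDBConfig.defaultConfig());
try {
    if (!orientDBContext.exists(name))
        orientDBContext.execute("create database " + name + " PLOCAL users ( admin identified by 'adminpwd' role admin)");
    ODatabaseSession session = orientDBContext.open(name, "admin", "adminpwd");
    try {
        // ... load the graph and run the processing here ...
    } finally {
        session.close();          // close the session once processing is finished
    }
} finally {
    orientDBContext.close();      // shut down the embedded OrientDB context
}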

Getting 100 percent CPU when trying to download a CSV in Spring

I am hitting a CPU performance issue on the server when trying to download a CSV in my project: CPU goes to 100%, even though the SQL query returns its response within a minute. The CSV contains around 600K records. For one user it works fine, but for concurrent users we get this issue.
Environment
Spring 4.2.5
Tomcat 7/8 (RAM 2GB Allocated)
MySQL 5.0.5
Java 1.7
Here is the Spring controller code:
@RequestMapping(value="csvData")
public void getCSVData(HttpServletRequest request,
                       HttpServletResponse response,
                       @RequestParam(value="param1", required=false) String param1,
                       @RequestParam(value="param2", required=false) String param2,
                       @RequestParam(value="param3", required=false) String param3) throws IOException {
    List<Log> logs = service.getCSVData(param1, param2, param3);
    response.setHeader("Content-type", "application/csv");
    response.setHeader("Content-disposition", "inline; filename=logData.csv");
    PrintWriter out = response.getWriter();
    out.println("Field1,Field2,Field3,.......,Field16");
    for (Log row : logs) {
        out.println(row.getField1() + "," + row.getField2() + "," + row.getField3() + "......" + row.getField16());
    }
    out.flush();
    out.close();
}
Persistence code (I am using Spring JdbcTemplate):
@Override
public List<Log> getCSVLog(String param1, String param2, String param3) {
    String sql = SqlConstants.CSV_ACTIVITY.toString();
    List<Log> csvLog = jdbcTemplate.query(sql, new Object[]{param1, param2, param3},
        new RowMapper<Log>() {
            @Override
            public Log mapRow(ResultSet rs, int rowNum) throws SQLException {
                Log log = new Log();
                log.setField1(rs.getInt("field1"));
                log.setField2(rs.getString("field2"));
                log.setField3(rs.getString("field3"));
                // ...
                log.setField16(rs.getString("field16"));
                return log;
            }
        });
    return csvLog;
}
I think you need to be specific about what you mean by "100% CPU usage": whether it's the Java process or the MySQL server. As you have around 600K records, trying to load everything into memory can easily end up in an OutOfMemoryError. Given that this works for one user, you've got enough heap space to process this number of records for just one user, and the symptoms surface when multiple users try to use the same service.
The first issue I can see in your posted code is that you load everything into one big list, whose memory footprint depends on the content of the Log class. Using a list like this also means you have to have enough memory to process the JDBC result set and generate a new list of Log instances, which becomes a major problem as the number of users grows. These short-lived objects cause frequent GC, and once GC cannot keep up with the amount of garbage being produced, things obviously fall over. To solve this, my suggestion is to use a scrollable ResultSet. Additionally, you can make the result set read-only; for example, below is a code fragment for creating a scrollable result set. Take a look at the documentation for how to use it.
Statement st = conn.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_READ_ONLY);
The above option is suitable if you're using pure JDBC or the Spring JDBC template. If Hibernate is already used in your project, you can still achieve the same thing with the code fragment below. Again, please check the documentation for more information, especially if you use a different JPA provider.
StatelessSession session = sessionFactory.openStatelessSession();
Query query = session.createSQLQuery(queryStr).setCacheable(false).setFetchSize(Integer.MIN_VALUE).setReadOnly(true);
query.setParameter(query_param_key, query_paramter_value);
ScrollableResults resultSet = query.scroll(ScrollMode.FORWARD_ONLY);
This way you're not loading all the records into the Java process in one go; instead they're loaded on demand, so the memory footprint stays small at any given time. Note that the JDBC connection will stay open until you're done processing the entire record set. This also means your DB connection pool can be exhausted if many users download CSV files from this endpoint at once, so you need to take measures against that (e.g. use an API manager to rate limit calls to this endpoint, read from a read replica, or whatever other option is viable).
My other suggestion is to stream the data, which you already partly do, so that records fetched from the DB are processed and sent to the client before the next set of records is fetched. I would also suggest using a CSV library such as Super CSV to handle this, as these libraries are designed to handle a good load of data.
Please note that this answer may not exactly answer your question, since you haven't posted all the necessary parts of your source (such as how you retrieve data from the DB), but it should point you in the right direction.
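To make the streaming suggestion concrete, here is a rough plain-JDBC sketch (not the poster's actual code): the query, column names and connection handling are placeholders. For MySQL Connector/J, a forward-only, read-only statement with a fetch size of Integer.MIN_VALUE is what triggers row-by-row streaming.
// Sketch only: write each row to the response as it is read, so nothing is buffered.
Connection conn = dataSource.getConnection();          // dataSource is a placeholder
try {
    Statement st = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                        ResultSet.CONCUR_READ_ONLY);
    st.setFetchSize(Integer.MIN_VALUE);                 // ask the MySQL driver to stream rows

    ResultSet rs = st.executeQuery("SELECT field1, field2, field3 FROM log");
    PrintWriter out = response.getWriter();             // inside the controller method
    out.println("Field1,Field2,Field3");
    while (rs.next()) {
        // each row goes straight to the client instead of into a List<Log>
        out.println(rs.getInt("field1") + "," + rs.getString("field2")
                + "," + rs.getString("field3"));
    }
    rs.close();
    st.close();
} finally {
    conn.close();
}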
Your problem is loading all the data from the database onto the application server at once. Try running the query with limit and offset parameters (with a mandatory ORDER BY), push the loaded records to the client, and then load the next part of the data with a different offset. This helps you decrease the memory footprint and does not require keeping a database connection open the whole time. Of course the database will be loaded a bit more, but the overall situation may improve. Try different limit values, for example 5K-50K, and monitor CPU usage on both the app server and the database.
If you can afford to keep many open connections to the database, @Bunti's answer is very good.
http://dev.mysql.com/doc/refman/5.7/en/select.html
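A rough sketch of that limit/offset loop, assuming Spring's JdbcTemplate, a placeholder query, and that Log is a bean whose property names match the columns:
// Sketch only: page through the result with LIMIT/OFFSET and a stable ORDER BY,
// flushing each page to the client before fetching the next one.
final int pageSize = 10000;                             // tune between 5K and 50K
int offset = 0;
PrintWriter out = response.getWriter();
out.println("Field1,Field2,Field3");
while (true) {
    List<Log> page = jdbcTemplate.query(
            "SELECT field1, field2, field3 FROM log ORDER BY id LIMIT ? OFFSET ?",
            new Object[]{pageSize, offset},
            new BeanPropertyRowMapper<Log>(Log.class));
    if (page.isEmpty()) {
        break;                                          // no more rows
    }
    for (Log row : page) {
        out.println(row.getField1() + "," + row.getField2() + "," + row.getField3());
    }
    out.flush();                                        // push this page before loading the next
    offset += pageSize;
}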

GWT file upload

I have the following problem: we are using a FormPanel which sends a file to a Servlet that takes the arguments and tries to parse XML from the file. This works fine.
The problem is when the user uploads a wrong file and parsing ends with a SAXException, which I would like to propagate (or at least its message) to the client. I tried something like
catch (SAXException ex) {
    response.setStatus(HttpServletResponse.SC_UNSUPPORTED_MEDIA_TYPE);
    response.flushBuffer();
}
but it's not working; I always get an empty pre tag (<pre></pre>). I am trying to catch this with
formPanel.addSubmitCompleteHandler(new SubmitCompleteHandler() {
    @Override
    public void onSubmitComplete(SubmitCompleteEvent event) {
        String s = event.getResults();
    }
});
I can use response.getWriter().write("Error"); in my Servlet, but how will the client know whether an error really occurred or not? Using something like event.getResults().contains("error") doesn't seem like a correct solution to me.
So I am thinking about using RequestBuilder, but I don't see how I could get the uploaded file and push it to my servlet that way. Or maybe converting my message to JSON would help?
You should refer to this thread on the Google GWT discussion group. The approach you described, parsing event.getResults() to determine whether there was an error (or to get the result in case of success), is the correct way to do it, even though it might seem barbaric.
As suggested in the linked discussion, you can look into GWT Upload for cleaner code, as well as for upload progress information. I believe your only two options for uploading files to a server from a web page are forms or Flash.
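One possible convention, sketched below under the assumption that you control both sides: have the servlet write a known marker such as "ERROR: ..." into the response body and look for that marker in the SubmitCompleteHandler. The "ERROR:" prefix is an arbitrary choice, not a GWT API, and Window.alert is only an example reaction.
// Servlet side (sketch): report the parse failure in the response body.
catch (SAXException ex) {
    response.setContentType("text/plain");
    response.getWriter().write("ERROR: " + ex.getMessage());
}

// Client side (sketch): strip any wrapping markup, then check for the marker.
formPanel.addSubmitCompleteHandler(new SubmitCompleteHandler() {
    @Override
    public void onSubmitComplete(SubmitCompleteEvent event) {
        String result = event.getResults() == null
                ? "" : event.getResults().replaceAll("<[^>]*>", "");
        if (result.startsWith("ERROR:")) {
            Window.alert("Upload failed: " + result.substring("ERROR:".length()).trim());
        } else {
            // success: continue as normal
        }
    }
});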

Is it possible to supply a new PropertiesConfiguration file at runtime?

Background:
I have a requirement that messages displayed to the user must vary both by language and by company division. Thus, I can't use out of the box resource bundles, so I'm essentially writing my own version of resource bundles using PropertiesConfiguration files.
In addition, I have a requirement that messages must be modifiable dynamically in production w/o doing restarts.
I'm loading up three different iterations of property files:
-basename_division.properties
-basename_2CharLanguageCode.properties
-basename.properties
These files exist in the classpath. This code is going into a tag library to be used by multiple portlets in a Portal.
I construct the possible .properties file names, and then try to load each of them via the following:
PropertiesConfiguration configurationProperties;
try {
    configurationProperties = new PropertiesConfiguration(propertyFileName);
    configurationProperties.setReloadingStrategy(new FileChangedReloadingStrategy());
} catch (ConfigurationException e) {
    /* This is ok -- it just means that the specific configuration file doesn't
       exist right now, which will often be true. */
    return null;
}
If it did successfully locate a file, it saves the created PropertiesConfiguration into a HashMap for reuse, and then tries to find the key. (Unlike regular resource bundles, if it doesn't find the key, it then tries the more general file to see if the key exists there, so that only overrides need to be put into the language/division-specific property files.)
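For illustration, a rough sketch of that lookup order; the method and variable names are hypothetical, and loadConfiguration() stands for the try/catch shown above plus the HashMap caching:
// Hypothetical sketch of the fallback lookup described above.
public String getMessage(String key, String division, String languageCode) {
    String[] candidates = {
            "basename_" + division + ".properties",
            "basename_" + languageCode + ".properties",
            "basename.properties"
    };
    for (String fileName : candidates) {
        PropertiesConfiguration config = loadConfiguration(fileName); // may return null
        if (config != null && config.containsKey(key)) {
            return config.getString(key);
        }
    }
    return null; // key not found at any level
}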
The Problem:
If a file did not exist the first time it was checked, it throws the expected exception. However, if at a later time a file is then later dropped into the classpath and this code is then re-run, the exception is still thrown. Restarting the portal obviously clears the problem, but that's not useful to me -- I need to be able to allow them to drop new messages in place for language/companyDivision overrides w/o a restart. And I'm not that interested in creating blank files for all possible divisions, since there are quite a few divisions.
I'm assuming this is a classLoader issue, in that it determines that the file did not exist in the classpath the first time, and caches that result when trying to reload the same file. I'm not interested in doing anything too fancy w/ the classLoader. (I'd be the only one who would be able to understand/maintain that code.) The specific environment is WebSphere Portal.
Any ways around this or am I stuck?
I am not sure whether Apache's FileChangedReloadingStrategy also reports ENTRY_CREATE events for a file system directory; my guess is that it does not.
If you're using Java 7, I propose trying the following: implement a new ReloadingStrategy using the Java 7 WatchService. That way, every time a file is changed in one of your target directories or a new property file is placed there, you can poll for the event and add the properties to your application.
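A minimal sketch of the Java 7 approach (the directory path is a placeholder, and this is just the watch loop that reacts to newly created property files, not a complete ReloadingStrategy implementation):
// Sketch only (Java 7): react to newly created property files in the messages directory.
import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchEvent;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;
import org.apache.commons.configuration.PropertiesConfiguration;
import org.apache.commons.configuration.reloading.FileChangedReloadingStrategy;

public class NewPropertyFileWatcher implements Runnable {

    private final Path directory = Paths.get("/opt/portal/messages"); // placeholder path

    @Override
    public void run() {
        try {
            WatchService watchService = FileSystems.getDefault().newWatchService();
            directory.register(watchService, StandardWatchEventKinds.ENTRY_CREATE);
            while (true) {
                WatchKey key = watchService.take(); // blocks until something is created
                for (WatchEvent<?> event : key.pollEvents()) {
                    Path created = directory.resolve((Path) event.context());
                    if (created.toString().endsWith(".properties")) {
                        PropertiesConfiguration config = new PropertiesConfiguration(created.toFile());
                        config.setReloadingStrategy(new FileChangedReloadingStrategy());
                        // register config in the map the tag library reads from
                    }
                }
                key.reset();
            }
        } catch (Exception e) {
            // interrupted, I/O error or unreadable properties file: log and stop watching
        }
    }
}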
If you are not on Java 7, a library such as JNotify may be a better way to get notified of a new entry in a directory. But again, you need to implement the ReloadingStrategy.
UPDATE for Java 6:
PropertiesConfiguration configurationProperties;
try {
    configurationProperties = new PropertiesConfiguration(propertyFileName);
    configurationProperties.setReloadingStrategy(new FileChangedReloadingStrategy());
} catch (ConfigurationException e) {
    JNotify.addWatch(propertyFileDirectory, JNotify.FILE_CREATED, false, new FileCreatedListener());
}
where
class FileCreatedListener implements JNotifyListener {
    // other methods
    public void fileCreated(int watchId, String rootPath, String fileName) {
        try {
            configurationProperties = new PropertiesConfiguration(rootPath + "/" + fileName);
            configurationProperties.setReloadingStrategy(new FileChangedReloadingStrategy());
            // or any other business with configurationProperties
        } catch (ConfigurationException e) {
            // ignore: the new file may not be a readable properties file yet
        }
    }
}

lazily compile JasperReports .jrxml to .jasper

I use Jasper reports with the JasperReportsMultiFormatView class provided by the Spring framework. This class takes care of compiling the source .jrxml files to their compiled .jasper format when the Spring application context is created.
However, this compilation process is really slowing down the application startup time. Is it possible for the reports to be lazily compiled instead of compiled at startup time, i.e. a report is only compiled the first time it is requested?
If this is not possible, alternative suggestions for how I can reduce/eliminate the report compilation time would be welcome. Of course, I could mandate that the compiled reports be checked into SVN along with the .jrxml files, but it's only a matter of time before someone (most likely me) forgets.
Cheers,
Don
I, like you, started out with the Spring helper classes for JasperReports but quickly abandoned them as too coarse-grained and inflexible, which is unusual for Spring. It's as if they were added as an afterthought.
The big problem I had with them was that once the reports were compiled, it required an appserver bounce to put in new versions. In my case, I was after a solution whereby I could change them on disk and they'd recompile, much like how JSPs normally work (if you don't turn that feature off, which many production sites would).
Alternatively, I wanted to be able to store the jrxml files in a database or run the reports remotely (e.g. through the JasperServer web services interface). The Spring classes made it all but impossible to implement such features.
So my suggestion to you is: roll your own. There are a couple of gotchas along the way, though, which I'll share with you to minimize the pain. Some of these things aren't obvious from the documentation.
The first thing you'll need is a JasperReports compiler. This is responsible for compiling a jrxml report design into a compiled JasperReport object. There are several implementations of this, but the one you want is the JRJdtCompiler. You can instantiate and inject it in a Spring application context. Avoid others like the BeanShell compiler, since running the report as a large BeanShell script is not particularly fast or efficient (I found this out the hard way before I knew any better).
You will need to include the jar files for the JRJdtCompiler. I think the full JasperReports distribution includes this jar. It's an Eclipse (JDT) product.
You can store the compiled JasperReport anywhere you like (HttpSession, servlet context or whatever). The fillReport() method is the primary one you're interested in. It creates a JasperPrint object, which is an instance of a filled (run) report. Parameters are just passed in as a Map.
Now, to create a version in HTML, PDF, etc., you need to export it. You use classes like JRHtmlExporter and JRPdfExporter to do this. They require certain parameters. The tricky one is the HTML exporter, because HTML obviously doesn't include the images. Jasper includes an ImageServlet class that fetches these from the session (where the JRHtmlExporter has put them), but you have to get the configuration of both the HTML exporter and the image servlet just right, and it's hard to tell where you're going wrong.
I don't remember the specifics, but there's an example of all this in the JasperReports Definitive Guide, which I'd highly recommend you get if you're spending any time at all with this product. It's fairly cheap at US$50. You could get the annual subscription too, but in the 18+ months I've had it I haven't seen a single change. Just buy the new version when it comes out if you need it (which you probably won't).
Hope this helps.
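For reference, a minimal sketch of the lazy compile-and-cache idea using the standard JasperReports manager classes. File locations and parameter names are placeholders, and JasperCompileManager simply delegates to whichever report compiler (e.g. the JDT one) is configured.
// Sketch only: compile a .jrxml on first request and cache the compiled report.
import java.io.OutputStream;
import java.sql.Connection;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import net.sf.jasperreports.engine.JasperCompileManager;
import net.sf.jasperreports.engine.JasperExportManager;
import net.sf.jasperreports.engine.JasperFillManager;
import net.sf.jasperreports.engine.JasperPrint;
import net.sf.jasperreports.engine.JasperReport;

public class LazyReportService {

    private final ConcurrentHashMap<String, JasperReport> compiledReports =
            new ConcurrentHashMap<String, JasperReport>();

    /** Compiles the report the first time it is asked for, then reuses the cached copy. */
    public JasperReport getReport(String jrxmlPath) throws Exception {
        JasperReport report = compiledReports.get(jrxmlPath);
        if (report == null) {
            report = JasperCompileManager.compileReport(jrxmlPath);
            compiledReports.putIfAbsent(jrxmlPath, report);
        }
        return report;
    }

    /** Fills the cached report with parameters and a JDBC connection, then writes a PDF. */
    public void renderPdf(String jrxmlPath, Map<String, Object> parameters,
                          Connection connection, OutputStream out) throws Exception {
        JasperPrint print = JasperFillManager.fillReport(getReport(jrxmlPath), parameters, connection);
        JasperExportManager.exportReportToPdfStream(print, out);
    }
}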
The report is compiled the first time it's run; put a breakpoint in AbstractJasperReportsView's protected final JasperReport loadReport(Resource resource) method to confirm this.
However, the above post is correct that you'll need to extend JasperReportsMultiFormatView if you want to provide any specific compilation process.
A great example of dynamic compilation is here: http://javanetspeed.blogspot.com/2013/01/jasper-ireport-with-java-spring-and.html
import net.sf.jasperreports.engine.JasperReport;
import org.apache.log4j.Logger;
import org.springframework.web.servlet.view.jasperreports.JasperReportsMultiFormatView;

public class DynamicJasperReportsMultiFormatView extends JasperReportsMultiFormatView {

    private static final Logger LOG = Logger.getLogger(DynamicJasperReportsMultiFormatView.class);

    /**
     * The JasperReport that is used to render the view.
     */
    private JasperReport jasperReport;

    /**
     * The last modified time of the jrxml resource file, used to force compilation.
     */
    private long jrxmlTimestamp;

    @Override
    protected void onInit() {
        jasperReport = super.getReport();
        try {
            String url = getUrl();
            if (url != null) {
                jrxmlTimestamp = getApplicationContext().getResource(url).getFile().lastModified();
            }
        } catch (Exception e) {
            e = null;
        }
    }

    @Override
    protected JasperReport getReport() {
        if (this.isDirty()) {
            LOG.info("Forcing recompilation of jasper report as the jrxml has changed");
            this.jasperReport = this.loadReport();
        }
        return this.jasperReport;
    }

    /**
     * Determines if the jrxml file is dirty by checking its timestamp.
     *
     * @return true to force recompilation because the report xml has changed, false otherwise
     */
    private boolean isDirty() {
        long curTimestamp = 0L;
        try {
            String url = getUrl();
            if (url != null) {
                curTimestamp = getApplicationContext().getResource(url).getFile().lastModified();
                if (curTimestamp > jrxmlTimestamp) {
                    jrxmlTimestamp = curTimestamp;
                    return true;
                }
            }
        } catch (Exception e) {
            e = null;
        }
        return false;
    }
}
