I was quite happy with my JSF app, which read the contents of received MQ messages and supplied them to the UI like this:
<rich:panel>
    <snip>
    <rich:panelMenuItem label="mylabel" action="#{MyBacking.updateCurrent}">
        <f:param name="current" value="mylog.log" />
    </rich:panelMenuItem>
    </snip>
</rich:panel>
<rich:panel>
    <a4j:outputPanel ajaxRendered="true">
        <rich:insert content="#{MyBacking.log}" highlight="groovy" />
    </a4j:outputPanel>
</rich:panel>
and in MyBacking.java
private String logFile = null;
...
public String updateCurrent() {
    FacesContext context = FacesContext.getCurrentInstance();
    setCurrent((String) context.getExternalContext().getRequestParameterMap().get("current"));
    setLog(getCurrent());
    return null;
}

public void setLog(String log) {
    sendMsg(log);
    msgBody = receiveMsg(moreargs);
    logFile = msgBody;
}

public String getLog() {
    return logFile;
}
until the contents of one of the messages were too big and Tomcat fell over. Obviously, I thought, I need to change the way it works so that I return some form of stream, so that no single object grows big enough to kill the container, and the content of successive messages is streamed to the UI as it comes in.
Am I right in thinking that I can replace the work I'm doing now on a String object with a BufferedOutputStream object, i.e. no change to the JSF code and something like this changing at the back end:
private BufferedOutputStream logFile = null;

public void setLog(String log) {
    sendMsg(args);
    logFile = (BufferedOutputStream) receiveMsg(moreargs);
}

public String getLog() {
    return logFile;
}
If Tomcat fell over on that, the content must have been over 128MB, or maybe double that (which is the default memory size of certain Tomcat versions). I don't think users would want to visit a webpage that big. It may feel fast when both server and client run at localhost, but it will be up to 100 times slower when served over the internet.
Introduce paging/filtering. Query and show just 100 entries at a time or so. Add a filter which returns specific results, such as logs within a certain time range or for a certain user, etc.
Google also doesn't show all the zillion available results at once on a single webpage; their servers would certainly "fall over" as well :)
Update as per the comment: is the bean put in the session scope or similar? That way it will indeed accumulate in memory quickly. Streaming is only possible if you have an InputStream at the one side and an OutputStream at the other side. There is no way to convert a String to a stream the way your casting attempt does, so that it is no longer stored in Java memory. The source has to stay at the other side, and it has to be retrieved byte by byte over the line. The only feasible approach would be to use an <iframe> whose src points to some HttpServlet which streams the data directly from the source to the response.
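A minimal sketch of such a servlet, assuming a hypothetical openLogStream() that obtains an InputStream from the message source (not part of any real API):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class LogStreamServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/plain");
        InputStream in = openLogStream(request.getParameter("current"));
        OutputStream out = response.getOutputStream();
        try {
            // Copy in small chunks so the full log never sits in memory at once.
            byte[] buffer = new byte[8192];
            int length;
            while ((length = in.read(buffer)) > 0) {
                out.write(buffer, 0, length);
            }
        } finally {
            in.close();
        }
    }

    // Hypothetical hook: wire this to whatever produces the MQ/log content.
    private InputStream openLogStream(String name) throws IOException {
        throw new UnsupportedOperationException("connect to your message source here");
    }
}

The JSF page would then point an <iframe> (or a plain link) at something like logServlet?current=mylog.log instead of pulling the whole log through a bean property.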
Your best bet is likely to store the whole thing in a database, or --if it doesn't contain user-specific data-- in the application scope, and share it among all sessions/requests.
I am developing a web application in Java. It has a file that is read on every page request, but occasionally (rarely) I have to change it. I need to come up with something that makes everything work as quickly as possible. Any advice?
You can use a sort of copy-on-change.
If your file is /path/myFile_v1.txt, write code similar to:
import java.util.concurrent.atomic.AtomicInteger;

private final AtomicInteger version = new AtomicInteger(1);

public String getFilePath() {
    return "/path/myFile_v" + version.get() + ".txt";
}

public synchronized void makeAChange() {
    // create a new copy of the file with some changes, then publish it
    version.incrementAndGet();
}
You can remove old copies after some time, or remove the copy from two versions back each time you publish a change.
Reading threads are never blocked while you make changes.
I am getting a CPU performance issue on the server when I try to download a CSV in my project: CPU goes to 100%, although SQL returns the response within 1 minute. We write around 600K records to the CSV. For one user it works fine, but for concurrent users we get this issue.
Environment
Spring 4.2.5
Tomcat 7/8 (RAM 2GB Allocated)
MySQL 5.0.5
Java 1.7
Here is the Spring controller code:
@RequestMapping(value = "csvData")
public void getCSVData(HttpServletRequest request,
        HttpServletResponse response,
        @RequestParam(value = "param1", required = false) String param1,
        @RequestParam(value = "param2", required = false) String param2,
        @RequestParam(value = "param3", required = false) String param3) throws IOException {
    List<Log> logs = service.getCSVData(param1, param2, param3);
    response.setHeader("Content-type", "application/csv");
    response.setHeader("Content-disposition", "inline; filename=logData.csv");
    PrintWriter out = response.getWriter();
    out.println("Field1,Field2,Field3,.......,Field16");
    for (Log row : logs) {
        out.println(row.getField1() + "," + row.getField2() + "," + row.getField3() + "......" + row.getField16());
    }
    out.flush();
    out.close();
}
Persistence code (I am using Spring JdbcTemplate):
@Override
public List<Log> getCSVLog(String param1, String param2, String param3) {
    String sql = SqlConstants.CSV_ACTIVITY.toString();
    List<Log> csvLog = jdbcTemplate.query(sql, new Object[]{param1, param2, param3},
        new RowMapper<Log>() {
            @Override
            public Log mapRow(ResultSet rs, int rowNum) throws SQLException {
                Log log = new Log();
                log.setField1(rs.getInt("field1"));
                log.setField2(rs.getString("field2"));
                log.setField3(rs.getString("field3"));
                .
                .
                .
                log.setField16(rs.getString("field16"));
                return log;
            }
        });
    return csvLog;
}
I think you need to be specific about what you mean by "100% CPU usage": whether it's the Java process or the MySQL server. As you have 600K records, trying to load everything into memory would easily end up in an OutOfMemoryError. Given that this works for one user, you've got enough heap space to process this number of records for just one user, and the symptoms surface when multiple users try to use the same service.
The first issue I can see in your posted code is that you load everything into one big list, whose size varies with the content of the Log class. Using a list like this also means you have to have enough memory to process the JDBC result set and generate a new list of Log instances. This can be a major problem with a growing number of users. These short-lived objects cause frequent GC, and once GC cannot keep up with the amount of garbage being produced, it obviously fails. To solve this major issue, my suggestion is to use a scrollable ResultSet. Additionally you can make this result set read-only; for example, below is a code fragment for creating a scrollable result set. Take a look at the documentation for how to use it.
Statement st = conn.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_READ_ONLY);
The above option is suitable if you're using pure JDBC or the Spring JDBC template. If Hibernate is already used in your project, you can still achieve the same with the code fragment below. Again, please check the documentation for more information if you have a different JPA provider.
StatelessSession session = sessionFactory.openStatelessSession();
Query query = session.createSQLQuery(queryStr).setCacheable(false).setFetchSize(Integer.MIN_VALUE).setReadOnly(true);
query.setParameter(query_param_key, query_paramter_value);
ScrollableResults resultSet = query.scroll(ScrollMode.FORWARD_ONLY);
This way you're not loading all the records into the Java process in one go; instead they're loaded on demand, and the memory footprint stays small at any given time. Note that the JDBC connection will stay open until you're done processing the entire record set. This also means that your DB connection pool can be exhausted if many users download CSV files from this endpoint. You need to take measures to overcome this problem (i.e. use an API manager to rate-limit calls to this endpoint, read from a read replica, or whatever other viable option).
My other suggestion is to stream the data, which you have already done, so that any records fetched from the DB are processed and sent to the client before the next set of records is processed. Again, I would suggest you use a CSV library such as SuperCSV to handle this, as these libraries are designed to handle a good load of data.
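A rough sketch combining both ideas, assuming plain JDBC against MySQL (where streaming traditionally requires a forward-only, read-only statement with fetch size Integer.MIN_VALUE) and SuperCSV's CsvListWriter; dataSource, sql and the field names stand in for your own wiring:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.servlet.http.HttpServletResponse;
import org.supercsv.io.CsvListWriter;
import org.supercsv.io.ICsvListWriter;
import org.supercsv.prefs.CsvPreference;

// Streams rows straight from the result set into the servlet response,
// so only one row is ever held in memory instead of a 600K-element list.
private void writeCsv(HttpServletResponse response, String sql) throws Exception {
    Connection conn = dataSource.getConnection();
    try {
        PreparedStatement st = conn.prepareStatement(sql,
                ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
        st.setFetchSize(Integer.MIN_VALUE); // MySQL's cue to stream results
        ResultSet rs = st.executeQuery();
        ICsvListWriter csv = new CsvListWriter(response.getWriter(),
                CsvPreference.STANDARD_PREFERENCE);
        csv.writeHeader("Field1", "Field2" /* ..., "Field16" */);
        while (rs.next()) {
            csv.write(rs.getInt("field1"), rs.getString("field2") /* ... */);
        }
        csv.close(); // flushes remaining buffered rows to the client
    } finally {
        conn.close();
    }
}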
Please note that this answer may not exactly answer your question, as you haven't provided all the necessary parts of your source (such as how you retrieve data from the DB), but it should give you the right direction to solve this issue.
Your problem is in loading all the data from the database into the application server at once. Try running the query with limit and offset parameters (with a mandatory order by), push the loaded records to the client, and then load the next part of the data with a different offset. This will decrease the memory footprint and doesn't require keeping the connection to the database open all the time. Of course, the database will be loaded a bit more, but maybe the overall situation will be better. Try different limit values, for example 5K-50K, and monitor CPU usage on both the app server and the database.
If you can afford to keep many open connections to the database, @Bunti's answer is very good.
http://dev.mysql.com/doc/refman/5.7/en/select.html
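A minimal sketch of this paging approach, reusing the jdbcTemplate, sql, parameters and writer from the question (PAGE_SIZE and logRowMapper are illustrative names, not from the original code):

// Fetch and write the CSV one page at a time instead of all 600K rows.
final int PAGE_SIZE = 10000; // tune between ~5K and 50K and watch CPU
int offset = 0;
List<Log> page;
do {
    page = jdbcTemplate.query(
            sql + " ORDER BY field1 LIMIT ? OFFSET ?", // ORDER BY keeps pages stable
            new Object[]{param1, param2, param3, PAGE_SIZE, offset},
            logRowMapper);
    for (Log row : page) {
        out.println(row.getField1() + "," + row.getField2() /* ... */);
    }
    out.flush(); // push this page to the client before fetching the next
    offset += PAGE_SIZE;
} while (page.size() == PAGE_SIZE);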
I have a problem passing some data from JavaScript to an applet. I think the size of the data is too big (18M characters in a string) to pass through LiveConnect.
I put code samples below:
JavaScript:
var bigData = generateSomeBigData(18000000); // string containing 18 000 000 characters
applet.Execute(bigData); // no error
Applet:
public void Execute(String data) {
    this.doSomethingWithData(data); // data is null
}
I didn't get any errors or exceptions in the Java console or in the JavaScript code.
I've tried running the applet with a bigger heap, but it didn't help.
... <param name="java_arguments" value="-Xmx128m" /> ...
The only problem is that I get null instead of a string containing the data; it doesn't depend on the browser (FF, Chrome).
I solved this problem. I moved the data generation to the server side and I'm passing the data to the applet using a one-time self-destructing link. The applet downloads the information, which is then no longer available, and returns the result.
Here you have an example:
Server:
String bigData = this.generateBigData(18000000);
String linkToData = this.getOneTimeLink(bigData);
JavaScript:
applet.Execute(linkToBigData);
Applet:
public void Execute(String link) {
    String data = this.downloadData(link);
    this.doSomethingWithData(data); // data is not null ;)
}
EDIT 11 May 2015:
Maybe you need a small explanation of the one-time self-destructing link: I used it because it was another requirement of my project, but it is not necessary for this solution. A sketch of one possible implementation follows.
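A hedged sketch of how such a one-time link could be implemented on the server; all names here are made up, not from the original project:

import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// One-time link registry: the payload is handed out once, then forgotten.
public class OneTimeLinks {

    private static final ConcurrentHashMap<String, String> STORE =
            new ConcurrentHashMap<String, String>();

    public static String getOneTimeLink(String bigData) {
        String token = UUID.randomUUID().toString();
        STORE.put(token, bigData);
        return "/data/" + token; // served by some servlet mapped to /data/*
    }

    // Called by the servlet handling /data/*: returns the payload once,
    // removing it so the link self-destructs after the first use.
    public static String consume(String token) {
        return STORE.remove(token);
    }
}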
I have a problem which is related more to logic than to technology. Here is the scenario (I am using Spring + Hibernate):
I need to read some data from the database and return it to the page on every GET request. But then I thought of a hack: what if someone uses a script to reload the page very frequently? That would cause many calls to the server. To avoid this, I thought of reading the data once and keeping it in class-level ("global") variables. By doing so I ended up writing very weird code: many global variables, and stupid ways of giving them initial values. For example, for a variable user-status, which is of type byte, I used -2 as the initial value so that my inner logic can tell that no value has been set from the database yet. Below is my code:
@Controller
/* @Secured("hasRole('ROLE_USERS')") */
@RequestMapping("member")
public class ApplyRoles {

    @Autowired
    private UserInformationForAccessApplication checkUserStatus;

    // we initialize these variables to avoid auto-initialization by the constructor
    private byte userStatus = Constant.IntializationOfGlobalVariable.GLOBALINIT,
            requesttype = Constant.IntializationOfGlobalVariable.GLOBALINIT,
            access = Constant.IntializationOfGlobalVariable.GLOBALINIT;

    Map<String, Object> accessnrole;
    Map<String, String> country;
    Map<String, String> roleArray;

    @Autowired
    StudentEnrollmentApplication enrollmentApplication;

    @Autowired
    SystemProperties systemProperties;

    @Autowired
    EmployeeEnrollmentApplicationResume employeeEnrollmentApplicationResume;

    @Autowired
    AccessEnrollmentProcessing accessEnrollmentProcessing;

    private String role = Constant.IntializationOfGlobalVariable.ROLENOTSET,
            fname, lname;

    @RequestMapping(value = "/user", method = RequestMethod.GET)
    public String checkingUserStatus(Model model, HttpSession session,
            Authentication authentication) {
        String sessionemail = "yashprit@gmail.com"; // (String) session
        // .getAttribute(Constant.SessionAttributes.LOGGEDINUSER);

        // first check the global value; if it is set, don't fetch from the database
        if (userStatus == Constant.IntializationOfGlobalVariable.GLOBALINIT) {
            // get user status from the MySQL database
            userStatus = checkUserStatus.checkStatus(sessionemail).get(0);
            if (!(userStatus == Constant.UserRoleApplicationStatus.NOTAPPLIED)) {
                access = checkUserStatus.checkStatus(sessionemail).get(1);
                model.addAttribute(Constant.SystemName.ACCESS, access);
            }
        }
        if (!(userStatus >= Constant.UserRoleApplicationStatus.NOTAPPLIED || userStatus <= Constant.UserRoleApplicationStatus.REJECTED)) {
            model.addAttribute("error", "User status is not available");
            return "redirect:error/pagenotfound";
        } else if (userStatus == Constant.UserRoleApplicationStatus.NOTAPPLIED) {
            if (requesttype == Constant.IntializationOfGlobalVariable.GLOBALINIT) {
                // get request type from the MongoDB database
                requesttype = checkUserStatus.getRequestType(sessionemail);
            }
            if (!(requesttype == Constant.RequestType.NORMALEBIT || requesttype == Constant.RequestType.INVITEBIT)) {
                model.addAttribute("error", "Facing Technical Issue, Please try again");
                return "redirect:error/pagenotfound";
            }
            if (requesttype == Constant.RequestType.INVITEBIT) {
                if (!(Byte.parseByte((String) accessnrole.get(Constant.SystemName.ACCESS)) == Constant.Access.USERBIT)) {
                    accessnrole = checkUserStatus.getAccessAndRole(sessionemail);
                }
                if (accessnrole.get(Constant.SystemName.ACCESS).equals(Constant.Database.ERRORMESSAGE)
                        || accessnrole.get(Constant.SystemName.ROLE).equals(Constant.Database.ERRORMESSAGE)) {
                    model.addAttribute("error", "Facing Technical Issue, Please try again");
                    return "redirect:error/pagenotfound";
                }
                model.addAttribute(Constant.SystemName.ACCESSNROLE, accessnrole);
                model.addAttribute(Constant.SystemName.REQUESTTYPE, requesttype);
            }
        }
        model.addAttribute(Constant.SystemName.USERSTATUS, userStatus);
        return "member/user";
    }
}
To avoid the global variables, I thought of using cookies, because I don't want to call the database on every page reload within the same session; once the data is loaded for a session, I shouldn't have to call the database again.
Anything that can help me redesign the above code is much appreciated.
Thanks
There are really two things that you are considering, and correct me if I'm wrong:
Caching on the server (in your Java application) to avoid doing a database lookup multiple times for the same data.
Preventing the client (browser) from sending multiple requests to the server.
The first can be resolved using Spring's caching support, which works through annotations on any given method, as sketched below. The documentation is available here.
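For instance, a minimal sketch of that annotation-driven caching; the service class, cache name, and stub method are made up for illustration:

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class UserStatusService {

    // Spring caches the result keyed by sessionemail, so repeated reloads
    // in the same session hit the cache instead of MySQL. Requires
    // @EnableCaching and a configured CacheManager; "userStatus" is a
    // made-up cache name.
    @Cacheable("userStatus")
    public byte checkStatus(String sessionemail) {
        return queryDatabaseForStatus(sessionemail); // existing DB lookup goes here
    }

    private byte queryDatabaseForStatus(String sessionemail) {
        return 0; // placeholder for the real query
    }
}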
The second is a bit more tricky, and I'd leave it for now unless you discover a performance problem. It's again possible to do in Spring: it takes advantage of the HTTP protocol and the caching controls available in the HTTP headers to inform the browser how long to cache responses; see the sketch below.
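A sketch of that HTTP-level approach, assuming Spring MVC 4.2+ (where the CacheControl builder is available); the mapping path is made up:

import java.util.concurrent.TimeUnit;

import org.springframework.http.CacheControl;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

@RequestMapping(value = "/user/status", method = RequestMethod.GET)
public ResponseEntity<String> userStatus() {
    // Tell the browser it may reuse this response for 60 seconds,
    // so rapid reloads never reach the server at all.
    return ResponseEntity.ok()
            .cacheControl(CacheControl.maxAge(60, TimeUnit.SECONDS))
            .body("status");
}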
What you are thinking about is called a "cache". It is a standard computer-science way of doing things, and people have been researching how to use caches for as long as there have been computers.
You might want to go do some reading on the subject. I found this one by Googling "cache tutorial java": http://javalandscape.blogspot.com/2009/01/cachingcaching-algorithms-and-caching.html
In the simplest terms (a one-item cache), what you want is to store some data object that you recently took some time to come up with. But you also have to store some sort of identifier so you can tell whether the next request is asking for the same data. If it isn't, you have to do all the work over. If it is the same data, you just return it again.
So the algorithm works something like this in this simple case:
if (storedData != null && userRequest.requestInfo.equals(storedRequestInfo)) {
    return storedData;
}
storedData = youCalculateTheRequestedData();
storedRequestInfo = userRequest.requestInfo;
return storedData;
It's not any real programming language, just something to show you how it works.
The requestInfo is whatever comes in with the request that you use to look up your database stuff. You save it in storedRequestInfo after any calculation.
This shows it returning some data to the user; that's what is in storedData.
It's a simple, one-element cache.
(To expand on this, you can store the storedRequestInfo and storedData in the session, and you end up with one of these stored for each user. You can also use a Java Map and store a bunch of storedData entries; a minimal sketch of such a bounded map is shown below. The problem is deciding how to limit your memory use. If you store too many of these for each user, you use up too much memory, so you limit how many each user can have, either by size or by count. Then you have to decide which one to delete when the cache gets too big. In the simple case, you always delete, in essence, the stored one and store a new one.
I noticed your comment. Ehcache is just a big fancy Map in the terms I used above. I don't know whether it's naturally session-dependent, but it can be made so by adding the session id to the cache key.)
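A minimal sketch of the bounded-Map idea mentioned above, using LinkedHashMap's access-order mode so the least recently used entry is evicted first (the limit of 100 is arbitrary):

import java.util.LinkedHashMap;
import java.util.Map;

// A tiny LRU cache: once it holds more than MAX_ENTRIES items,
// the least recently accessed entry is dropped automatically.
public class SimpleLruCache<K, V> extends LinkedHashMap<K, V> {

    private static final int MAX_ENTRIES = 100; // arbitrary memory limit

    public SimpleLruCache() {
        super(16, 0.75f, true); // true = access order, which enables LRU eviction
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > MAX_ENTRIES;
    }
}

Storing one such map per session gives each user their own bounded cache, which is roughly what Ehcache does for you, with more knobs.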
I'm successfully using Play 1.2.4 to serve large binary file downloads to users using the renderBinary() method.
I'd like to have a hint of when the user actually completes the download. Generally speaking, I know this is somewhat possible as I've done it before. In an old version of my website, I wrote a simple servlet that served up binary file downloads. Once that servlet finished writing out the contents of the file, a notification was sent. Certainly not perfect, but useful nonetheless. In my testing, it did provide an indication of how long the user took to download a file.
Reviewing the Play source, I see that the play.mvc.results.RenderBinary class has a handy apply() method that I could use. I wrote my own version of RenderBinary so I could send the notification after the apply() method finished writing out the file contents.
The problem I found is that calls to response.out.write() obviously buffer the outgoing bytes (via Netty?), so even though I am writing out several megabytes of data, the calls to play.mvc.Http.Response.out.write() complete in seconds, while it takes the downloader a couple of minutes to actually download the file.
I don't mind writing custom classes, although I'd prefer to use a stock Play 1.2.4 distribution.
Any ideas on how to get a notification of when the end of a file download is pushed out towards the user's browser?
It seems this may help you, as it tackles a somewhat similar problem:
Detect when browser receives file download
I'm not sure you'll be able to do it via renderBinary or via an @After annotation in the controller. Some browser-side detection of the download, followed by a notification to the server (pinging the download's end), would work.
There may be an alternative: using WebSockets (streaming the file via the socket and then having the client answer), but it may be overkill for this :)
You can use ArchivedEventStream.
First, create a serializable ArchivedEventStream subclass:
public class Stream<String> extends ArchivedEventStream<String> implements Serializable {
    public Stream(int arg0) {
        super(arg0);
    }
}
Then, in your controller...
public static void downloadPage() {
    Stream<String> userStream = Cache.get(session.getId(), Stream.class);
    if (userStream == null) {
        userStream = new Stream<String>(5);
        Cache.add(session.getId(), userStream);
    }
    render();
}

public static void download() {
    await(10000); // to provide some latency; actually not needed
    renderBinary(Play.getFile("yourfile!"));
}

public static void isDownloadFinished() {
    Stream<String> userStream = Cache.get(session.getId(), Stream.class);
    List<IndexedEvent<String>> list = await(userStream.nextEvents(0));
    renderJSON(list.get(0).data);
}

@After(only = "download")
static void after() {
    Stream<String> userStream = Cache.get(session.getId(), Stream.class);
    userStream.publish("ok");
}
In your HTML...
#{extends 'main.html' /}
#{set title:'downloadPage' /}
<a href="/application/download">download</a>
<script>
    $(document).ready(function(){
        $.ajax('/application/isDownloadFinished', {success: function(data){
            if (data) {
                console.log("downloadFinished");
            }
        }});
    });
</script>
When the download finishes, the original page will retrieve the notification.
This code is just a sample. You can read the API of ArchivedEventStream and make your own implementation.