I'm trying to develop a client-server chat application using Java servlets, MySQL (InnoDB engine), and the Jetty server. I tested the connection code with 100 simulated users hitting the server at once using JMeter, but the average time for all of them to get connected was 40 seconds :( with a minimum per-thread time of 2 seconds and a maximum of 80 seconds. My connection table has two columns, connect(user, stranger), and my servlet code is shown below. I'm using the InnoDB engine for row-level locking, and an explicit write lock (SELECT ... FOR UPDATE) inside a transaction. If the transaction rolls back because of a deadlock, I loop it until it executes at least once. Once two users get connected, each updates the stranger column with the other's randomly generated unique number.
I'm using c3p0 connection pooling with a minimum pool size of 100, and Jetty with a minimum of 100 threads.
Please help me identify the bottlenecks, or the tools needed to find them.
import java.io.*;
import java.util.*;
import java.sql.*;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.naming.*;
import javax.sql.*;
public class connect extends HttpServlet {
public void doGet(HttpServletRequest req, HttpServletResponse res)
throws java.io.IOException {
String unumber=null;
String snumber=null;
String status=null;
String uage=null;
String usex=null;
String ulocation=null;
String uaslmode=null;
InitialContext contxt1=null;
DataSource ds1=null;
Connection conxn1=null;
PreparedStatement stmt1=null;
ResultSet rs1=null;
PreparedStatement stmt2=null;
InitialContext contxt3=null;
DataSource ds3=null;
Connection conxn3=null;
PreparedStatement stmt3=null;
ResultSet rs3=null;
PreparedStatement stmt4=null;
ResultSet rs4=null;
PreparedStatement stmt5=null;
boolean checktransaction = true;
int counttransaction = 0; // counts deadlock-induced restarts
unumber=req.getParameter("number"); // GET THE USER's UNIQUE NUMBER
try {
contxt1 = new InitialContext();
ds1 =(DataSource)contxt1.lookup("java:comp/env/jdbc/user");
conxn1 = ds1.getConnection();
stmt1 = conxn1.prepareStatement("SELECT * FROM profiles WHERE number=?"); // GETTING USER DATA FROM PROFILE
stmt1.setString(1,unumber);
rs1 = stmt1.executeQuery();
if(rs1.next()) {
res.getWriter().println("user found in PROFILE table.........");
uage=rs1.getString("age");
usex=rs1.getString("sex");
ulocation=rs1.getString("location");
uaslmode=rs1.getString("aslmode");
stmt1.close();
stmt1=null;
conxn1.close();
conxn1 = null;
contxt3 = new InitialContext();
ds3 =(DataSource)contxt3.lookup("java:comp/env/jdbc/chat");
conxn3 = ds3.getConnection();
conxn3.setAutoCommit(false);
while(checktransaction) {
// TRANSACTION STARTS HERE
try {
stmt2 = conxn3.prepareStatement("INSERT INTO "+ulocation+" (user,stranger) VALUES (?,'')"); // INSERTING RECORD INTO LOCAL CHAT TABLE
stmt2.setString(1,unumber);
stmt2.executeUpdate();
stmt2.close();
stmt2 = null;
res.getWriter().println("inserting row into LOCAL CHAT TABLE.........");
System.out.println("transaction starting........."+unumber);
stmt3 = conxn3.prepareStatement("SELECT user FROM "+ulocation+" WHERE (stranger='' && user!=?) LIMIT 1 FOR UPDATE");
stmt3.setString(1,unumber); // SEARCHING FOR STRANGER
rs3=stmt3.executeQuery();
if (rs3.next()) { // stranger found
stmt4 = conxn3.prepareStatement("SELECT stranger FROM "+ulocation+" WHERE user=?");
stmt4.setString(1,unumber); //CHECKING FOR USER STATUS BEFORE CONNECTING TO STRANGER
rs4=stmt4.executeQuery();
if(rs4.next()) {
status=rs4.getString("stranger");
}
stmt4.close();
stmt4=null;
if("".equals(status)) { // user not yet paired (also guards against a null status)
snumber = rs3.getString("user");
stmt5 = conxn3.prepareStatement("UPDATE "+ulocation+" SET stranger=? WHERE user=?"); // CONNECTING USER AND STRANGER
stmt5.setString(1,snumber);
stmt5.setString(2,unumber);
stmt5.executeUpdate();
stmt5.setString(2,snumber);
stmt5.setString(1,unumber);
stmt5.executeUpdate();
stmt5.close();
stmt5=null;
}
} // end of stranger found
stmt3.close();
stmt3 = null;
conxn3.commit(); // TRANSACTION ENDING
checktransaction = false;
} // END OF TRY INSIDE WHILE
catch(java.sql.SQLTransactionRollbackException e) {
System.out.println("transaction restarted......."+unumber);
counttransaction = counttransaction+1;
}
} //END OF WHILE LOOP
conxn3.close();
conxn3 = null;
} // END OF USER FOUND IN PROFILE TABLE
} // end of try
catch(java.sql.SQLException sqlexe) {
try { if (conxn3 != null) conxn3.rollback(); }
catch(java.sql.SQLException exe) { exe.printStackTrace(); }
sqlexe.printStackTrace();
res.getWriter().println("UNABE TO GET CONNECTION FROM POOL!");
}
catch(javax.naming.NamingException namexe) {
namexe.printStackTrace();
res.getWriter().println("DATA SOURCE LOOK UP FAILED!");
}
}
}
How many users do you have? Can you load them all into memory first and do a memory lookup?
If you separate your DB layer from your presentation layer, this is something you can change without changing the servlet (as it shouldn't care where the data comes from).
If you use Java memory it shouldn't take more than about 20 ms per user.
Here is a test which creates one million profiles in memory, looks them up and creates chat entries, which are removed later. The average time per operation was about 640 ns (nanoseconds, or billionths of a second).
import java.util.LinkedHashMap;
import java.util.Map;
public class Main {
public static void main(String... args) {
UserDB userDB = new UserDB();
// add 1,000,000 users
for (int i = 0; i < 1000000; i++)
userDB.addUser(
new Profile(i,
"user" + i,
(short) (18 + i % 90),
i % 2 == 0 ? Profile.Sex.Male : Profile.Sex.Female,
"here", "mode"));
// look up pairs of users and add chat sessions.
long start = System.nanoTime();
int operations = 0;
for(int i=0;i<userDB.profileCount();i+=2) {
Profile p0 = userDB.getProfileByNumber(i);
operations++;
Profile p1 = userDB.getProfileByNumber(i+1);
operations++;
userDB.chatsTo(i, i+1);
operations++;
}
for(int i=0;i<userDB.profileCount();i+=2) {
userDB.endChat(i);
operations++;
}
long time = System.nanoTime() -start;
System.out.printf("Average lookup and update time per operation was %d ns%n", time/operations);
}
}
class UserDB {
private final Map<Long, Profile> profileMap = new LinkedHashMap<Long, Profile>();
private final Map<Long, Long> chatsWith = new LinkedHashMap<Long, Long>();
public void addUser(Profile profile) {
profileMap.put(profile.number, profile);
}
public Profile getProfileByNumber(long number) {
return profileMap.get(number);
}
public void chatsTo(long number1, long number2) {
chatsWith.put(number1, number2);
chatsWith.put(number2, number1);
}
public void endChat(long number) {
Long other = chatsWith.get(number);
if (other == null) return;
Long number2 = chatsWith.get(other);
if (number2 != null && number2 == number)
chatsWith.remove(other);
}
public int profileCount() {
return profileMap.size();
}
}
class Profile {
final long number;
final String name;
final short age;
final Sex sex;
final String location;
final String aslmode;
Profile(long number, String name, short age, Sex sex, String location, String aslmode) {
this.number = number;
this.name = name;
this.age = age;
this.sex = sex;
this.location = location;
this.aslmode = aslmode;
}
enum Sex {Male, Female}
}
prints
Average lookup and update time per operation was 636 ns
If you need this to be faster you could look at using Trove4j which could be twice as fast in this case. Given this is likely to be fast enough, I would try to keep things simple.
Have you considered caching reads and batching writes?
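A minimal sketch of the batching side (java.sql/java.util imports assumed; "chat" stands in for the question's per-location table, and pendingUpdates for a queue the application would drain periodically):
void flushConnectUpdates(Connection conn, List<String[]> pendingUpdates) throws SQLException {
    // send all queued (stranger, user) pairs in one JDBC batch,
    // so N updates cost one round trip instead of N
    PreparedStatement ps = conn.prepareStatement(
            "UPDATE chat SET stranger=? WHERE user=?");
    try {
        for (String[] pair : pendingUpdates) {
            ps.setString(1, pair[0]); // stranger's number
            ps.setString(2, pair[1]); // user's number
            ps.addBatch();
        }
        ps.executeBatch();
    } finally {
        ps.close();
    }
}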
I'm not sure how you can realistically expect anyone to determine where the bottlenecks are by merely looking at the source code.
To find the bottlenecks, you should run your app and the load test with a profiler attached, such as JVisualVM or YourKit or JProfiler. This will tell you exactly how much time is spent in each area of the code.
The only thing that anyone can really critique from looking at your code is the basic architecture:
Why are you looking up the DataSource on each doGet()? (See the sketch after this list.)
Why are you using transactions for what appears to be unrelated database insertions and queries?
Is using an RDBMS to back a chat system really the best idea in the first place?
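On the first point, a minimal sketch of resolving the DataSources once in init() rather than on every request (JNDI names and the servlet name are taken from the question; error handling is illustrative):
public class connect extends HttpServlet {
    private DataSource userDs;
    private DataSource chatDs;

    @Override
    public void init() throws ServletException {
        try {
            InitialContext ctx = new InitialContext();
            userDs = (DataSource) ctx.lookup("java:comp/env/jdbc/user");
            chatDs = (DataSource) ctx.lookup("java:comp/env/jdbc/chat");
        } catch (NamingException e) {
            throw new ServletException("DataSource lookup failed", e);
        }
    }
    // doGet() then simply calls userDs.getConnection() / chatDs.getConnection()
}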
If your response times are that high, you need to properly index your DB tables. Based on the times you provided, I will assume this was not done. You need to speed up your reads and writes.
Look up execution plans and how to read them. An execution plan will show you if/when indexes are being used by your queries, and whether you are performing seeks or scans etc. on the tables. By using these, you can tweak your queries/indexes/tables to be more optimal.
As others have stated, an RDBMS won't be your best option in large-scale applications, but since you are just starting out it should be OK until you learn more.
Learn to properly set up those tables and you should see your deadlock counts and response times go down. A sketch follows.
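For example, the SELECT ... FOR UPDATE in the question filters on stranger and user, so a composite index on those columns lets InnoDB lock a single index entry instead of scanning rows (a sketch; chat_chennai is a hypothetical stand-in for whatever table name ulocation produces):
-- Illustrative names only; adapt to your actual location tables.
CREATE INDEX idx_stranger_user ON chat_chennai (stranger, user);

-- Verify the index is used; the access type should be ref/range, not ALL:
EXPLAIN SELECT user FROM chat_chennai WHERE stranger = '' AND user != '12345' LIMIT 1;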
I have just edited my previous question and am providing more details (hopefully someone will be able to help).
I have a Redis cluster with 1 master and 2 slaves. All 3 nodes are managed by Sentinel. The failover works fine and when the new master is elected, I can write on the new master (from the command line).
Now, I am trying to write a small Java program using Redisson which, ideally, should write records into Redis and be able to handle the failover (which it should do, as far as I have understood). This is my code so far.
import org.redisson.Redisson;
import org.redisson.RedissonNode;
import org.redisson.api.*;
import org.redisson.api.annotation.RInject;
import org.redisson.config.Config;
import org.redisson.config.RedissonNodeConfig;
import org.redisson.config.SubscriptionMode;
import java.util.Collection;
import java.util.Collections;
import java.util.UUID;
public class RedissonTest {
public static class RunnableTask implements Runnable {
@RInject
RedissonClient client;
@Override
public void run(){
System.out.println("I am in ..");
RMap<String, String> map = client.getMap("completeNewMap");
System.out.println("is thread interrupted?? " + Thread.currentThread().isInterrupted());
NodesGroup ngroup = client.getNodesGroup();
Collection<Node> nodes = ngroup.getNodes();
for(Node node : nodes){
System.out.println("Node ip "+ node.getAddr().toString()+" type: "+node.getType().toString());
}
for(int i=0; i < 10000; i++) {
String key = "bg_key_"+String.valueOf(i);
String value = String.valueOf(UUID.randomUUID());
String oldVal = map.get(key);
map.put(key, value);
RBucket<String> bck = client.getBucket(key);
bck.set(value);
System.out.println("I am going to replace the old value " + oldVal + " with new value " + value + " at key "+key);
}
System.out.println("I am outta here!!");
}
}
public static void main(String[] args) {
Config config = new Config();
config.useSentinelServers()
.setMasterName("redis-cluster")
.addSentinelAddress("192.168.56.101:26379")
.addSentinelAddress("192.168.56.102:26379")
.addSentinelAddress("192.168.56.103:26379")
.setPingTimeout(100)
.setTimeout(60000)
.setRetryAttempts(25)
.setReconnectionTimeout(45000)
.setRetryInterval(1500)
.setReadMode(ReadMode.SLAVE)
.setConnectTimeout(20000)
.setSubscriptionMode(SubscriptionMode.MASTER);
RedissonClient client = Redisson.create(config);
RedissonNodeConfig nodeConfig = new RedissonNodeConfig(config);
nodeConfig.setExecutorServiceWorkers(Collections.singletonMap("myExecutor6", 1));
RedissonNode node = RedissonNode.create(nodeConfig);
node.start();
System.out.println("Node address "+node.getRemoteAddress().toString());
RExecutorService e = client.getExecutorService("myExecutor6");
e.execute(new RunnableTask());
e.shutdown();
if(e.isShutdown()) {
e.delete();
}
client.shutdown();
node.shutdown();
System.out.println("Hello World!" );
}
}
Running the code, a couple of things happen that I don't understand.
The first one is:
Why does Redisson recognise my 3 hosts as Redis slaves?
Why are the key-value pairs I created not stored in Redis?
The idea is that, after I have been able to write into Redis, I would start to test the failover by killing the master, expecting that the program will handle it and continue to write to the new master without losing a message (it would be nice to be able to cache the messages while the failover occurs).
What happens with this simple program is that I can write into Redis, but when I kill the master, the execution just hangs for a time that seems to be close to the setTimeout value and then exits without completing the task.
Any suggestions?
You should set the retryAttempts parameter big enough for Redisson to survive the failover period.
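For example, if a failover typically takes 30-45 seconds, something like this (the setters are the same ones used in the question; the values are illustrative, not tuned):
Config config = new Config();
config.useSentinelServers()
    .setMasterName("redis-cluster")
    .addSentinelAddress("192.168.56.101:26379")
    // retryAttempts * retryInterval should comfortably exceed the
    // observed failover window: 40 * 1500 ms = 60 s here
    .setRetryAttempts(40)
    .setRetryInterval(1500);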
I am running GWT RPC calls to a GAE server, querying for article objects that are stored in the datastore with JDO, and I am paginating the results by using cursors.
I send an initial RPC call to start the pagination with a "range" of 10 results. I store the query cursor in the memcache, and retrieve it when the user requests for the next page of 10 results. The code that implements this is shown below.
The range is always the same, 10 results. However, some subsequent RPC calls return 2 results, or 12 results. It is very inconsistent. The calls also sometimes return duplicate results.
I have read this Google developers documentation: https://developers.google.com/appengine/docs/java/datastore/queries#Java_Limitations_of_cursors. It mentions that: "Cursors don't always work as expected with a query that uses an inequality filter or a sort order on a property with multiple values. The de-duplication logic for such multiple-valued properties does not persist between retrievals, possibly causing the same result to be returned more than once."
As you can see in the code, I am sorting on a "date" property. This property only has one value.
Can you help me see what I am doing wrong here? Thanks.
This is the code that executes the RPC call on the GAE server:
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.List;
import java.util.logging.Logger;
import javax.jdo.PersistenceManager;
import javax.jdo.Query;
import javax.servlet.http.HttpSession;
import com.google.appengine.api.datastore.Cursor;
import com.google.appengine.datanucleus.query.JDOCursorHelper;
import com.google.gwt.user.server.rpc.RemoteServiceServlet;
//...
private void getTagArticles(String tag, int range, boolean start) {
PersistenceManager pm = PMF.getNonTxnPm();
ArticleStreamItemSummaryDTO aDTO = null;
ArticleStreamItem aDetached = null;
summaryList = new ArrayList<ArticleStreamItemSummaryDTO>();
String cursorString = null;
session = getThreadLocalRequest().getSession();
UserAccount currentUser = LoginHelper.getLoggedInUser(session, pm);
String cursorID = currentUser.getId().toString() + tag;
if (start) { // The start or restart of the query
CacheSupport.cacheDelete(String.class.getName(), cursorID);
}
Object o = CacheSupport.cacheGet(String.class.getName(), cursorID);
if (o != null && o instanceof String) {
cursorString = (String) o;
}
Query q = null;
try {
q = pm.newQuery(ArticleStreamItem.class);
if (cursorString != null) {
Cursor cursor = Cursor.fromWebSafeString(cursorString);
Map<String, Object> extensionMap = new HashMap<String, Object>();
extensionMap.put(JDOCursorHelper.CURSOR_EXTENSION, cursor);
q.setExtensions(extensionMap);
}
q.setFilter("tag == tagParam");
q.declareParameters("String tagParam");
q.setOrdering("date desc");
q.setRange(0, range);
@SuppressWarnings("unchecked")
List<ArticleStreamItem> articleStreamList = (List<ArticleStreamItem>) q.execute(tag);
if (articleStreamList.iterator().hasNext()) {
Cursor cursor = JDOCursorHelper.getCursor(articleStreamList);
cursorString = cursor.toWebSafeString();
CacheSupport.cacheDelete(String.class.getName(), cursorID);
CacheSupport.cachePutExp(String.class.getName(), cursorID, cursorString, CACHE_EXPIR);
for (ArticleStreamItem a : articleStreamList) {
aDetached = pm.detachCopy(a);
aDTO = aDetached.buildSummaryItem();
summaryList.add(aDTO);
}
}
}
catch (Exception e) {
// e.printStackTrace();
logger.warning(e.getMessage());
}
finally {
q.closeAll();
pm.close();
}
}
The code snippet that I provided in the question above actually works well. The problem arose from the client side. The RPC calls were sometimes being made a few milliseconds from each other, and that was creating the inconsistent behavior that I was seeing in the results returned.
I changed the client-side code to make a single RPC call every 5 seconds, and that fixed it.
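A minimal sketch of that throttling on the GWT client, using com.google.gwt.user.client.Timer (articleServiceAsync, tag and callback are hypothetical names; the RPC signature is assumed to mirror the server method from the question):
Timer pollTimer = new Timer() {
    @Override
    public void run() {
        // one paginated RPC per tick; "false" continues from the cached cursor
        articleServiceAsync.getTagArticles(tag, 10, false, callback);
    }
};
pollTimer.scheduleRepeating(5000); // 5 seconds between calls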
Trying to use a similar example from the sample code found here
My sample function is:
void query()
{
String nodeResult = "";
String rows = "";
String resultString;
String columnsString;
System.out.println("In query");
// START SNIPPET: execute
ExecutionEngine engine = new ExecutionEngine( graphDb );
ExecutionResult result;
try ( Transaction ignored = graphDb.beginTx() )
{
result = engine.execute( "start n=node(*) where n.Name =~ '.*79.*' return n, n.Name" );
// END SNIPPET: execute
// START SNIPPET: items
Iterator<Node> n_column = result.columnAs( "n" );
for ( Node node : IteratorUtil.asIterable( n_column ) )
{
// note: we're grabbing the name property from the node,
// not from the n.name in this case.
nodeResult = node + ": " + node.getProperty( "Name" );
System.out.println("In for loop");
System.out.println(nodeResult);
}
// END SNIPPET: items
// START SNIPPET: columns
List<String> columns = result.columns();
// END SNIPPET: columns
// the result is now empty, get a new one
result = engine.execute( "start n=node(*) where n.Name =~ '.*79.*' return n, n.Name" );
// START SNIPPET: rows
for ( Map<String, Object> row : result )
{
for ( Entry<String, Object> column : row.entrySet() )
{
rows += column.getKey() + ": " + column.getValue() + "; ";
System.out.println("nested");
}
rows += "\n";
}
// END SNIPPET: rows
resultString = engine.execute( "start n=node(*) where n.Name =~ '.*79.*' return n.Name" ).dumpToString();
columnsString = columns.toString();
System.out.println(rows);
System.out.println(resultString);
System.out.println(columnsString);
System.out.println("leaving");
}
}
When I run this query in the web console I get many results (as there are multiple nodes that have a Name attribute containing the pattern 79), yet running this code returns no results. The debug print statements 'In for loop' and 'nested' never print either. This must mean no results are found in the Iterator, yet that doesn't make sense.
And yes, I already checked and made sure that the graphDb variable is the same as the path for the web console. I have other code earlier that uses the same variable to write to the database.
EDIT - More info
If I place the contents of query() in the same function that creates my data, I get the correct results. If I run the query by itself, it returns nothing. It's almost as if the query works only in the instance where I add the data, and not if I come back to the database cold in a separate instance.
EDIT2 -
Here is a snippet of code that shows the bigger context of how it is being called and sharing the same DBHandle
package ContextEngine;
import ContextEngine.NeoHandle;
import java.util.LinkedList;
/*
* Class to handle streaming data from any coded source
*/
public class Streamer {
private NeoHandle myHandle;
private String contextType;
Streamer()
{
}
public void openStream(String contextType)
{
myHandle = new NeoHandle();
myHandle.createDb();
}
public void streamInput(String dataLine)
{
Context context = new Context();
/*
* get database instance
* write to database
* check for errors
* report errors & success
*/
System.out.println(dataLine);
//apply rules to data (make ContextRules do this, send type and string of data)
ContextRules contextRules = new ContextRules();
context = contextRules.processContextRules("Calls", dataLine);
//write data (using linked list from contextRules)
NeoProcessor processor = new NeoProcessor(myHandle);
processor.processContextData(context);
}
public void runQuery()
{
NeoProcessor processor = new NeoProcessor(myHandle);
processor.query();
}
public void closeStream()
{
/*
* close database instance
*/
myHandle.shutDown();
}
}
Now, if I call streamInput AND query in the same instance (parent calls), the query returns results. If I only call query and do not enter ANY data in that instance (yet the web console shows data for the same query), I get nothing. Why would I have to create the nodes and enter them into the database at runtime just to get a valid query result? Shouldn't I ALWAYS get the same results with such a query?
You mention that you are using the Neo4j Browser, which comes with Neo4j. However, the example you posted is for Neo4j Embedded, which is the in-process version of Neo4j. Are you sure you are talking to the same database when you try your query in the Browser?
In order to talk to Neo4j Server from Java, I'd recommend looking at the Neo4j JDBC driver, which has good support for connecting to the Neo4j server from Java.
http://www.neo4j.org/develop/tools/jdbc
You can set up a simple connection by adding the Neo4j JDBC jar to your classpath, available here: https://github.com/neo4j-contrib/neo4j-jdbc/releases Then just use Neo4j as you would any JDBC driver:
Connection conn = DriverManager.getConnection("jdbc:neo4j://localhost:7474/");
Statement stmt = conn.createStatement();
// plain JDBC; see the driver docs for parameterized Cypher via PreparedStatement
ResultSet rs = stmt.executeQuery("start n=node(*) return id(n) as id");
while(rs.next()) {
System.out.println(rs.getLong("id"));
}
Refer to the JDBC documentation for more advanced usage.
To answer your question on why the data is not durably stored, it may be one of many reasons. I would attempt to incrementally scale back the complexity of the code to try and locate the culprit. For instance, until you've found your problem, do these one at a time:
Instead of looping through the result, print it using System.out.println(result.dumpToString());
Instead of the regex query, try just MATCH (n) RETURN n, to return all data in the database
Make sure the data you are seeing in the browser is not "old" data inserted earlier on, but really is an insert from your latest run of the Java program. You can verify this by deleting the data via the browser before running the Java program using MATCH (n) OPTIONAL MATCH (n)-[r]->() DELETE n,r;
Make sure you are actually working against the same database directory. You can verify this by leaving the server running: if you can still start your Java program, then (unless your Java program is using the Neo4j REST bindings) you are not using the same directory, since two Neo4j databases cannot run against the same database directory simultaneously.
I have a list of games in the GAE datastore and I want to query a fixed number of them, starting from a certain offset, i.e. get the next 25 games starting from the entry with id "75".
PersistenceManager pm = PMF.get().getPersistenceManager(); // from Google examples
Query query = pm.newQuery(Game.class); // objects of class Game are stored in datastore
query.setOrdering("creationDate asc");
/* querying for open games, not created by this player */
query.setFilter("state == Game.STATE_OPEN && serverPlayer.id != :playerId");
String playerId = "my-player-id";
List<Game> games = query.execute(playerId); // if there's lots of games, returned list has more entries, than user needs to see at a time
//...
Now I need to extend that query to fetch only 25 games, and only games following the entry with id "75", so the user can browse the open games, fetching only 25 of them at a time.
I know there are lots of examples for the GAE datastore, but they are mostly in Python, including the sample code for setting a limit on the query.
I am looking for a working Java code sample and couldn't find one so far.
It sounds like you want to facilitate paging via Query Cursors. See: http://code.google.com/appengine/docs/java/datastore/queries.html#Query_Cursors
From the Google doc:
public class ListPeopleServlet extends HttpServlet {
@Override
protected void doGet(HttpServletRequest req, HttpServletResponse resp)
throws ServletException, IOException {
DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
Query q = new Query("Person");
PreparedQuery pq = datastore.prepare(q);
int pageSize = 15;
resp.setContentType("text/html");
resp.getWriter().println("<ul>");
FetchOptions fetchOptions = FetchOptions.Builder.withLimit(pageSize);
String startCursor = req.getParameter("cursor");
// If this servlet is passed a cursor parameter, let's use it
if (startCursor != null) {
fetchOptions.startCursor(Cursor.fromWebSafeString(startCursor));
}
QueryResultList<Entity> results = pq.asQueryResultList(fetchOptions);
for (Entity entity : results) {
resp.getWriter().println("<li>" + entity.getProperty("name") + "</li>");
}
resp.getWriter().println("</ul>");
String cursor = results.getCursor().toWebSafeString();
// Assuming this servlet lives at '/people'
resp.getWriter().println(
"<a href='/people?cursor=" + cursor + "'>Next page</a>");
}
}
Thanks everyone for the help. Cursors were the right answer.
The thing is that I am pretty much stuck with JDO and can't use DatastoreService, so I finally found this link:
http://code.google.com/appengine/docs/java/datastore/jdo/queries.html#Query_Cursors
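For reference, a minimal JDO sketch of fetching one page (Game and PMF come from the question; the cursor handling mirrors the JDOCursorHelper pattern from that page):
PersistenceManager pm = PMF.get().getPersistenceManager();
Query query = pm.newQuery(Game.class);
query.setOrdering("creationDate asc");
query.setRange(0, 25); // at most 25 games per page
if (cursorString != null) {
    // resume where the previous page ended
    Map<String, Object> extensionMap = new HashMap<String, Object>();
    extensionMap.put(JDOCursorHelper.CURSOR_EXTENSION, Cursor.fromWebSafeString(cursorString));
    query.setExtensions(extensionMap);
}
@SuppressWarnings("unchecked")
List<Game> games = (List<Game>) query.execute();
// persist the cursor string (e.g. in memcache) for the next request
cursorString = JDOCursorHelper.getCursor(games).toWebSafeString();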
I'm running into the following (common) error after I added a new DB table, a Hibernate class, and other classes to access the Hibernate class:
java.lang.OutOfMemoryError: Java heap space
Here's the relevant code:
From .jsp:
<%
com.companyconnector.model.HomepageBean homepage = new com.companyconnector.model.HomepageBean();
%>
From HomepageBean:
public class HomepageBean {
...
private ReviewBean review1;
private ReviewBean review2;
private ReviewBean review3;
public HomepageBean () {
...
GetSurveyResults gsr = new GetSurveyResults();
List<ReviewBean> rbs = gsr.getRecentReviews();
review1 = rbs.get(0);
review2 = rbs.get(1);
review3 = rbs.get(2);
}
From GetSurveyResults:
public List<ReviewBean> getRecentReviews() {
List<OpenResponse> ors = DatabaseBean.getRecentReviews();
List<ReviewBean> rbs = new ArrayList<ReviewBean>();
for(int x = 0; ors.size() > x; x =+ 2) {
String employer = "";
rbs.add(new ReviewBean(ors.get(x).getUid(), employer, ors.get(x).getResponse(), ors.get(x+1).getResponse()));
}
return rbs;
}
and lastly, from DatabaseBean:
public static List<OpenResponse> getRecentReviews() {
SessionFactory session = HibernateUtil.getSessionFactory();
Session sess = session.openSession();
Transaction tx = sess.beginTransaction();
List results = sess.createQuery(
"from OpenResponse where (uid = 46) or (uid = 50) or (uid = 51)"
).list();
tx.commit();
sess.flush();
sess.close();
return results;
}
Sorry for all the code and such a long message, but I'm getting over a million instances of ReviewBean (I used jProfiler to find this). Am I doing something wrong in the for loop in GetSurveyResults? Any other problems?
I'm happy to provide more code if necessary.
Thanks for the help.
Joe
Using JProfiler to find which objects occupy the memory is a good first step. Now that you know that needlessly many instances are created, a logical next analysis step is to run your application in debug mode, and step through the code that allocates the ReviewBeans. If you do that, the bug should be obvious. (I am pretty sure I spotted it, but I'd rather teach you how to find such bugs on your own. It's a skill that is indispensable for any good programmer).
Also, you probably want to close the session/commit the transaction in a finally block, to make sure it's always invoked even if your method throws an exception. The standard pattern for working with resources in Java (simplified pseudo-code):
Session s = null;
try {
s = openSession();
// do something useful
}
finally {
if (s != null) s.close();
}
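Applied to the getRecentReviews() method from the question, that pattern might look like this (a sketch; the query string is unchanged):
public static List<OpenResponse> getRecentReviews() {
    Session sess = HibernateUtil.getSessionFactory().openSession();
    Transaction tx = null;
    try {
        tx = sess.beginTransaction();
        @SuppressWarnings("unchecked")
        List<OpenResponse> results = sess.createQuery(
            "from OpenResponse where (uid = 46) or (uid = 50) or (uid = 51)").list();
        tx.commit();
        return results;
    } catch (RuntimeException e) {
        if (tx != null) tx.rollback(); // don't leave the transaction dangling
        throw e;
    } finally {
        sess.close(); // always release the connection back to the pool
    }
}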