I have installed JProfiler on my Linux machine and I am saving the profiling data into a .jps file. I then load this file into the JProfiler UI on my local machine.
Here is my config file:
<?xml version="1.0" encoding="UTF-8"?>
<config>
<nextId id="104" />
<generalSettings setupHasRun="false">
<recordingProfiles>
<recordingProfile id="10" name="CPU recording">
<actionKey id="cpu"/>
</recordingProfile>
</recordingProfiles>
</generalSettings>
<templates>
<template id="50" name="Instrumentation, all features supported" startFrozen="false" recordCPUOnStartup="false" vmCannotExit="false" instrumentationType="1" samplingNoFilters="false" lineNumbers="false" samplingFrequency="5" timeType="1" disableCPUProfiling="false" recordAllocOnStartup="true" recordArrayAlloc="true" enableTriggersOnStartup="true" allocTreeRecordingType="1" disableMonitorContentions="false" componentDetection="true" chronoHeap="false" autoUpdatePeriodLong="5" autoUpdatePeriodShort="2" allUrls="false" payloadCap="50" eventCap="20000" showSystemThreads="false" utilConcurrentHandling="true" libraryDebugParameters="" exceptionalCap="5" exceptionalTimeType="4" autoTuneInstrumentation="true" autoTuneMaxAverage="100" autoTuneMinPerMille="10" samplingPayloadCallStacks="true" description="This is JProfiler's fully featured mode. In this setting, call stack information is accurate, but CPU overhead and distortion of measured call times may be high, depending on your filter settings. You should define inclusive filters for your own packages." system="true" />
<template id="51" name="Sampling for CPU profiling, some features not supported" startFrozen="false" recordCPUOnStartup="false" vmCannotExit="false" instrumentationType="3" samplingNoFilters="false" lineNumbers="false" samplingFrequency="5" timeType="1" disableCPUProfiling="false" recordAllocOnStartup="true" recordArrayAlloc="true" enableTriggersOnStartup="true" allocTreeRecordingType="1" disableMonitorContentions="false" componentDetection="true" chronoHeap="false" autoUpdatePeriodLong="5" autoUpdatePeriodShort="2" allUrls="false" payloadCap="50" eventCap="20000" showSystemThreads="false" utilConcurrentHandling="true" libraryDebugParameters="" exceptionalCap="5" exceptionalTimeType="4" autoTuneInstrumentation="true" autoTuneMaxAverage="100" autoTuneMinPerMille="10" samplingPayloadCallStacks="true" description="This template is particularly suitable for CPU profiling and for memory profiling when accurate allocation information is not important. Sampling has a very low overhead and does not distort measured call tines. Some views, like the method statistics are no available. JEE payloads cannot be annotated in the call tree, but payload hotspots without backtraces are available." system="true" />
</templates>
<sessions>
<session id="80" templateId="50" name="Animated Bezier Curve Demo" type="1" isStartupWorkingDirectory="true" mainClass="bezier.BezierAnim">
<filters>
<filter type="inclusive" name="com." />
</filters>
<exceptionalMethods/>
<classPath>
<classPathEntry path="demo/bezier/classes" />
</classPath>
<sourcePath>
<sourcePathEntry path="demo/bezier/src" />
</sourcePath>
<probes>
<probe name="com.jprofiler.agent.probe.interceptor.TrackingInterceptor" enabled="true" startProbeRecording="false" events="false" annotatePayloads="false">
<id value="3" />
</probe>
</probes>
</session>
</sessions>
</config>
The problem I am facing is that I am not getting any details for method statistics under the CPU views section in the JProfiler UI.
However, I am able to see data in the other views, such as the telemetries.
The version in use is JProfiler 9.1, and I started my test from the sample config.xml. Do I need to make any changes in my config file to get method-level statistics into my .jps file?
Method statistics are recorded separately, because the overhead is too high to always record them together with CPU data.
When the session is live, go to the method statistics view and enable recording. For offline profiling, there is a trigger action that starts method statistics recording.
I am using Apache Ignite 2.6 with the Ignite File System (IGFS). When I repeatedly write a specific file of about 25 MB to IGFS, the data is not saved into non-heap space. Instead, it goes onto the heap, which is subject to garbage collection and is relatively slow. How do I get IGFS to save a file into the large off-heap space I have allocated for it?
High-level architecture: I have a client Ignite node running inside Tomcat for now, and a server Ignite node on which I intend this data to be stored. Scaling can come once I get this working as expected, but it is very slow because of the problem above, and it also OOMs very quickly once it runs out of heap space. The thing is, I want it to use the 30 GB of NON-HEAP space I have allocated!
I intend this to be an in-memory cache. I am allocating 2 GB of heap space and 30 GB of non-heap space to the JVM. The non-heap space never gets used, and the node runs out of memory as a result. I have confirmed that the non-heap space is not used using the JMX console Memory tab: non-heap space stays well below 100 MB, while heap space quickly balloons to 2 GB and then the JVM crashes.
The details. First, my Ignite configuration (Spring XML):
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_FALLBACK"/>
<property name="searchSystemEnvironment" value="true"/>
</bean>
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="marshaller">
<bean class="org.apache.ignite.internal.binary.BinaryMarshaller" />
</property>
<property name="fileSystemConfiguration">
<list>
<bean class="org.apache.ignite.configuration.FileSystemConfiguration">
<property name="name" value="igfs"/>
<property name="blockSize" value="#{128 * 1024}"/>
<property name="perNodeBatchSize" value="512"/>
<property name="perNodeParallelBatchCount" value="16"/>
<property name="prefetchBlocks" value="32"/>
</bean>
</list>
</property>
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
<property name="addresses">
<list>
<value>127.0.0.1:47500..47509</value>
</list>
</property>
</bean>
</property>
</bean>
</property>
<property name="dataStorageConfiguration">
<bean class="org.apache.ignite.configuration.DataStorageConfiguration" >
<!-- if I don't set this, the system region runs out of memory almost immediately -->
<property name="systemRegionMaxSize" value="#{6L * 1024 * 1024 * 1024"} />
<property name="systemRegionInitialSize" value="#{6L * 1024 * 1024 * 1024"} />
</bean>
</property>
</bean>
</beans>
Here is the script I use to start up my Ignite server process. It's a shell script running on a Linux machine with 64 GB RAM and 40 GB disk space.
IGNITE_HOME=/data/apache-ignite
export IGNITE_HOME
IGNITE_JMX_PORT=1234
export IGNITE_JMX_PORT
$IGNITE_HOME/bin/ignite.sh $IGNITE_HOME/ignite-media-server.xml -J-Xmx2G -J-Xms2G -J-XX:+HeapDumpOnOutOfMemoryError -J-XX:HeapDumpPath=$IGNITE_HOME -J-XX:+PrintGC -J-XX:+PrintGCTimeStamps -J-XX:+PrintGCDateStamps -J-Xloggc:$IGNITE_HOME/gc.log-$(date +%m%d-%H%M%S) -J-XX:+UseG1GC -J-XX:+DisableExplicitGC -J-XX:MaxDirectMemorySize=30G
This is the code that creates my client IGFS object, through which I save files to Ignite. The files tend to be on the large side.
public void init() throws Exception {
    igniteInstanceName = "client-name=" + hostInfo.getLocalHost();
    Ignition.setClientMode(true);
    // reading in the same config file that the server uses to start up above;
    // the big difference is the client mode set to true here
    try (InputStream configFileInputStream = new FileInputStream(ResourceUtils.getFile("ignite-media-server.xml"))) {
        ignite = IgnitionEx.start(configFileInputStream, igniteInstanceName, null, null);
        igfs = ignite.fileSystem("igfs");
    } catch (Throwable t) {
        // do log
    }
}
Here is the save method that saves my files to Ignite:
public void saveStream(String cachePath, AudioInputStream toCache) throws IOException {
    OutputStream os = null;
    try {
        IgfsPath cacheFile = new IgfsPath(cachePath);
        os = igfs.create(cacheFile, true);
        AudioSystem.write(toCache, AudioFileFormat.Type.WAVE, os);
    } finally {
        // close streams
    }
}
Why doesn't my data get saved to the speedy off-heap space? What am I missing? My server config comes almost straight from the provided IGFS example.
Adding to the confusion, when I use ignitevisor.cmd to inspect memory usage on the server node before and after a shorter test (one that doesn't make it crash), I see the following.
Looking at memory allocation while Ignite is empty in ignitevisor.cmd, my IGFS region says:
Heap memory initialized: 2 GB
Heap memory used: 56 MB
Non-heap memory initialized: 2 MB
Non-heap memory used: 49 MB
Non-heap memory maximum: 744 MB
Then I create just shy of 2 GB worth of files saved in IGFS (just short of an OOM, since from bitter experience I know it will blow up shortly after that) and use ignitevisor.cmd to look at the memory allocation of the nodes again. This is what I see:
Heap memory initialized: 2 GB
Heap memory used: 1 GB
Non-heap memory used: 64 MB
Non-heap memory maximum: 744 MB
Why is there still almost nothing in non-heap? And why does ignitevisor think that the non-heap maximum is 744 MB when it should be 30 GB?
Also of interest: if I increase my heap size to 6 GB, it runs longer, but the server still crashes with an "OutOfMemoryError: Java heap space". Interestingly, I can reproduce this even when I enable disk persistence. Inspecting the heap dump file reveals a lot of ConcurrentLinkedHashMap entries. The entries themselves are "org.apache.ignite.internal.GridTopic" objects. Each one has a UUID, and most appear to be of type TOPIC_DATASTREAM.
Data is saved to Off-Heap all right, but you should be aware that a lot of transient objects involved in IGFS operation will still be briefly held on heap (and GCed after that).
"JMX Console Memory tab--non heap space" is the wrong metric. I don't think that there are any JVM metrics for Off-Heap. However, Ignite will print Off-Heap statistics at regular intervals.
Why you would run out of memory is not obvious. Have you tried collecting heap dump and analyzing it?
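If you want the cache data to actually be able to grow into the 30 GB you have set aside, and want its usage to show up in the periodic metrics that Ignite logs, the data region can be sized and instrumented explicitly. This is only a sketch against the Ignite 2.x DataStorageConfiguration API; the region name and sizes are illustrative, taken from the figures in the question:
<property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
        <property name="systemRegionInitialSize" value="#{6L * 1024 * 1024 * 1024}" />
        <property name="systemRegionMaxSize" value="#{6L * 1024 * 1024 * 1024}" />
        <!-- size the default region to the intended 30 GB and expose its off-heap metrics -->
        <property name="defaultDataRegionConfiguration">
            <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                <property name="name" value="Default_Region" />
                <property name="initialSize" value="#{1L * 1024 * 1024 * 1024}" />
                <property name="maxSize" value="#{30L * 1024 * 1024 * 1024}" />
                <property name="metricsEnabled" value="true" />
            </bean>
        </property>
    </bean>
</property>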
We have a problem where a penetration checker run for something like 12 hours causes JGroups to disconnect: the slave doesn't rejoin the cluster, we get split brain and other symptoms of missing replication, and the cluster doesn't recover.
<config xmlns="urn:org:jgroups"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.6.xsd">
<TCP bind_addr="NON_LOOPBACK"
bind_port="${infinispan.jgroups.bindPort}"
enable_diagnostics="false"
thread_naming_pattern="pl"
send_buf_size="640k"
sock_conn_timeout="300"
thread_pool.min_threads="${jgroups.thread_pool.min_threads:2}"
thread_pool.max_threads="${jgroups.thread_pool.max_threads:30}"
thread_pool.keep_alive_time="60000"
thread_pool.queue_enabled="false"
internal_thread_pool.min_threads="${jgroups.internal_thread_pool.min_threads:5}"
internal_thread_pool.max_threads="${jgroups.internal_thread_pool.max_threads:20}"
internal_thread_pool.keep_alive_time="60000"
internal_thread_pool.queue_enabled="true"
internal_thread_pool.queue_max_size="500"
oob_thread_pool.min_threads="${jgroups.oob_thread_pool.min_threads:20}"
oob_thread_pool.max_threads="${jgroups.oob_thread_pool.max_threads:200}"
oob_thread_pool.keep_alive_time="60000"
oob_thread_pool.queue_enabled="false"
/>
<TCPPING async_discovery="true"
initial_hosts="${infinispan.jgroups.tcpping.initialhosts}"
port_range="1"/>
<MERGE3 min_interval="10000"
max_interval="30000"
/>
<FD_SOCK />
<FD />
<VERIFY_SUSPECT />
<pbcast.NAKACK2 use_mcast_xmit="false"
xmit_interval="1000"
xmit_table_num_rows="50"
xmit_table_msgs_per_row="1024"
xmit_table_max_compaction_time="30000"
max_msg_batch_size="100"
resend_last_seqno="true"
/>
<UNICAST3 xmit_interval="500"
xmit_table_num_rows="50"
xmit_table_msgs_per_row="1024"
xmit_table_max_compaction_time="30000"
max_msg_batch_size="100"
conn_expiry_timeout="0"
/>
<pbcast.STABLE stability_delay="500"
desired_avg_gossip="5000"
max_bytes="1M"
/>
<pbcast.GMS print_local_addr="true" join_timeout="15000"/>
<pbcast.FLUSH />
<FRAG2 />
</config>
Versions:
JGroups 3.6.13
Infinispan 8.1.0
Hibernate Search 5.3
I'm wondering if we can change our JGroups configuration so that the cluster node will eventually be able to rejoin, even after 12 hours of "attack", so that we don't have to restart the servers.
Define disconnect for me first, please!
Regarding your stack, I have a few suggestions / questions:
In general, I suggest starting from the tcp.xml shipped with the version you use and then modifying it according to your needs
TCPPING: does initial_hosts contain all cluster members?
Replace FD with FD_ALL (see the sketch after this list)
STABLE: desired_avg_gossip of 5s is a bit small; this generates more traffic than needed
GMS.join_timeout of 15s is quite high; this is the startup time of the first member, and it also influences discovery time
What do you need FLUSH for?
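As a rough illustration of the FD_ALL and STABLE suggestions, the failure-detection and stability part of the stack could look something like the following. The interval/timeout and gossip values here are only illustrative, not tuned recommendations for your environment:
<FD_SOCK />
<!-- heartbeat-based failure detection across all members, instead of ring-based FD -->
<FD_ALL interval="8000" timeout="40000" />
<VERIFY_SUSPECT timeout="3000" />
<pbcast.STABLE desired_avg_gossip="20000" max_bytes="1M" />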
I have a 3-node Infinispan cluster with numOwners=2, and I'm running into issues with cluster views when one of the nodes gets disconnected from the network and joins back. Here are the logs:
(Incoming-1,BrokerPE-0-28575) ISPN000094: Received new cluster view for channel ISPN: [BrokerPE-0-28575|2] (3) [BrokerPE-0-28575, SEM03VVM-201-59385, SEM03VVM-202-33714]
ISPN000094: Received new cluster view for channel ISPN: [BrokerPE-0-28575|3] (2) [BrokerPE-0-28575, SEM03VVM-202-33714] --> one node disconnected
ISPN000093: Received new, MERGED cluster view for channel ISPN: MergeView::[BrokerPE-0-28575|4] (2) [BrokerPE-0-28575, SEM03VVM-201-59385], 2 subgroups: [BrokerPE-0-28575|3] (2) [BrokerPE-0-28575, SEM03VVM-202-33714], [BrokerPE-0-28575|2] (3) [BrokerPE-0-28575, SEM03VVM-201-59385, SEM03VVM-202-33714] --> incorrect merge
Following is my jgroups config:
<config xmlns="urn:org:jgroups"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/jgroups-3.6.xsd">
<TCP
bind_addr="${jgroups.tcp.address:127.0.0.1}"
bind_port="${jgroups.tcp.port:7800}"
loopback="true"
port_range="30"
recv_buf_size="20m"
send_buf_size="640k"
max_bundle_size="31k"
use_send_queues="true"
enable_diagnostics="false"
sock_conn_timeout="300"
bundler_type="old"
thread_naming_pattern="pl"
timer_type="new3"
timer.min_threads="4"
timer.max_threads="10"
timer.keep_alive_time="3000"
timer.queue_max_size="500"
thread_pool.enabled="true"
thread_pool.min_threads="2"
thread_pool.max_threads="30"
thread_pool.keep_alive_time="60000"
thread_pool.queue_enabled="true"
thread_pool.queue_max_size="100"
thread_pool.rejection_policy="Discard"
oob_thread_pool.enabled="true"
oob_thread_pool.min_threads="2"
oob_thread_pool.max_threads="30"
oob_thread_pool.keep_alive_time="60000"
oob_thread_pool.queue_enabled="false"
oob_thread_pool.queue_max_size="100"
oob_thread_pool.rejection_policy="Discard"
internal_thread_pool.enabled="true"
internal_thread_pool.min_threads="1"
internal_thread_pool.max_threads="10"
internal_thread_pool.keep_alive_time="60000"
internal_thread_pool.queue_enabled="true"
internal_thread_pool.queue_max_size="100"
internal_thread_pool.rejection_policy="Discard"
/>
<!-- Ergonomics, new in JGroups 2.11, are disabled by default in TCPPING until JGRP-1253 is resolved -->
<TCPPING timeout="3000" initial_hosts="${jgroups.tcpping.initial_hosts:HostA[7800],HostB[7801]}"
port_range="2"
num_initial_members="3"
ergonomics="false"
/>
<!-- <MPING bind_addr="${jgroups.bind_addr:127.0.0.1}" break_on_coord_rsp="true"
mcast_addr="${jboss.default.multicast.address:228.2.4.6}"
mcast_port="${jgroups.mping.mcast_port:43366}"
ip_ttl="${jgroups.udp.ip_ttl:2}"
num_initial_members="3"/> -->
<MERGE3 max_interval="30000" min_interval="10000"/>
<FD_SOCK bind_addr="${jgroups.bind_addr}"/>
<FD timeout="3000" max_tries="3"/>
<VERIFY_SUSPECT timeout="3000"/>
<!-- <BARRIER /> -->
<!-- <pbcast.NAKACK use_mcast_xmit="false" retransmit_timeout="300,600,1200,2400,4800" discard_delivered_msgs="true"/> -->
<pbcast.NAKACK2 use_mcast_xmit="false"
xmit_interval="1000"
xmit_table_num_rows="100"
xmit_table_msgs_per_row="10000"
xmit_table_max_compaction_time="10000"
max_msg_batch_size="100" discard_delivered_msgs="true"/>
<UNICAST3 xmit_interval="500"
xmit_table_num_rows="20"
xmit_table_msgs_per_row="10000"
xmit_table_max_compaction_time="10000"
max_msg_batch_size="100"
conn_expiry_timeout="0"/>
<pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000" max_bytes="400000"/>
<pbcast.GMS print_local_addr="true" join_timeout="3000" view_bundling="true" merge_timeout="6000"/>
<tom.TOA/> <!-- the TOA is only needed for total order transactions-->
<UFC max_credits="2m" min_threshold="0.40"/>
<!-- <MFC max_credits="2m" min_threshold="0.40"/> -->
<FRAG2 frag_size="30k"/>
<RSVP timeout="60000" resend_interval="500" ack_on_delivery="false" />
<!-- <pbcast.STATE_TRANSFER/> -->
</config>
I'm using Infinispan 7.0.2 and JGroups 3.6.1. I've tried a lot of configs, but nothing worked. Your help would be much appreciated.
[UPDATE] Things worked fine after setting the following property to more than 1: internal_thread_pool.min_threads.
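For reference, that fix is a change to the internal thread pool attributes on the TCP element shown above; the only change is raising min_threads above 1 (the exact value here is just an example):
internal_thread_pool.enabled="true"
internal_thread_pool.min_threads="2"
internal_thread_pool.max_threads="10"
internal_thread_pool.keep_alive_time="60000"
internal_thread_pool.queue_enabled="true"
internal_thread_pool.queue_max_size="100"
internal_thread_pool.rejection_policy="Discard"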
So to simplify this, we have
View broker|2={broker,201,202}
201 leaves, the view is now broker|3={broker,202}
Then there is a merge between views broker|3 and broker|2, which leads to incorrect view broker|4={broker,201}
I created [1] to investigate what's going on here. First off, the subviews of the merge view should have included 202 as a subgroup coordinator, but that wasn't the case.
Can you describe what exactly happened here? Can this be reproduced? It would be nice to have TRACE level logs for FD, FD_ALL, MERGE3 and GMS...
[1] https://issues.jboss.org/browse/JGRP-2128
I am very new to Mule.
I am facing a problem. I have a requirement to read data from a CSV file on the D:\ drive and insert it into a PostgreSQL database using Mule, every 2 minutes.
So I have chosen Quartz.
Here is my code:
<configuration>
<expression-language autoResolveVariables="true">
<import class="org.mule.util.StringUtils" />
<import class="org.mule.util.ArrayUtils" />
</expression-language>
</configuration>
<spring:beans>
<spring:bean id="jdbcDataSource" class=" ... your data source ... " />
</spring:beans>
<jdbc:connector name="jdbcConnector" dataSource-ref="jdbcDataSource">
<jdbc:query key="insertRow"
value="insert into my_table(col1, col2) values(#[message.payload[0]],#[message.payload[1]])" />
</jdbc:connector>
<quartz:connector name="myQuartzConnector" validateConnections="true" doc:name="Quartz">
<receiver-threading-profile maxThreadsActive="1"/>
</quartz:connector>
<flow name="QuartzFlow" processingStrategy="synchronous">
<quartz:inbound-endpoint doc:name="Quartz"
jobName="CronJobSchedule" cronExpression="0 0/2 * * * ?"
connector-ref="myQuartzConnector" repeatCount="1">
<quartz:event-generator-job>
<quartz:payload>quartzSchedular started</quartz:payload>
</quartz:event-generator-job>
</quartz:inbound-endpoint>
<flow-ref name="csvFileToDatabase" doc:name="Flow Reference" />
</flow>
<flow name="csvFileToDatabase">
<file:inbound-endpoint path="/tmp/mule/inbox"
pollingFrequency="5000" moveToDirectory="/tmp/mule/processed">
<file:filename-wildcard-filter pattern="*.csv" />
</file:inbound-endpoint>
<!-- Load all file in RAM - won't work for big files! -->
<file:file-to-string-transformer />
<!-- Split each row, dropping the first one (header) -->
<splitter
expression="#[rows=StringUtils.split(message.payload, '\n\r');ArrayUtils.subarray(rows,1,rows.size())]" />
<!-- Transform CSV row in array -->
<expression-transformer expression="#[StringUtils.split(message.payload, ',')]" />
<jdbc:outbound-endpoint queryKey="insertRow" />
</flow>
This is working, but if I place a CSV file in D:\, Mule reads the file and writes to the database immediately, without waiting for 2 minutes (the Quartz scheduling time).
Even if I set the file connector's polling frequency to pollingFrequency="120000" (2 minutes = 2*60*1000 milliseconds), Quartz does not wait for 2 minutes: if I place a CSV file in D:\ within the 2-minute window, Mule still reads it and writes to the database right away.
What I want is for Mule to process a CSV file placed in D:\ only on the 2-minute Quartz schedule. Which changes do I need to make here?
Could anyone please help me?
It is because the flow csvFileToDatabase has its own inbound endpoint that will execute regardless of the Quartz flow. You have the file:inbound-endpoint set to poll every 5000 milliseconds.
There's no need to have both an inbound endpoint and Quartz scheduling the flow.
Either change the file:inbound-endpoint polling frequency or, if you really want to use Quartz to trigger the flow, take a look at the Mule requester module, which allows you to use an inbound endpoint mid-flow: https://github.com/mulesoft/mule-module-requester
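For the first option, a minimal sketch based on the flow from the question, with the Quartz flow dropped and the file poll slowed to two minutes (the paths, splitter expression and query key are taken verbatim from the question):
<flow name="csvFileToDatabase">
    <!-- poll the drop directory every 2 minutes instead of triggering via Quartz -->
    <file:inbound-endpoint path="/tmp/mule/inbox"
        pollingFrequency="120000" moveToDirectory="/tmp/mule/processed">
        <file:filename-wildcard-filter pattern="*.csv" />
    </file:inbound-endpoint>
    <file:file-to-string-transformer />
    <splitter
        expression="#[rows=StringUtils.split(message.payload, '\n\r');ArrayUtils.subarray(rows,1,rows.size())]" />
    <expression-transformer expression="#[StringUtils.split(message.payload, ',')]" />
    <jdbc:outbound-endpoint queryKey="insertRow" />
</flow>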
I'm running Tomcat 7; the server is quite powerful, with 8 GB RAM and 8 cores.
My problem is that the RES memory keeps getting higher and higher, until the server just doesn't respond anymore, without even calling OnOutOfMemoryError.
Tomcat configuration:
-Xms1024M
-Xmx2048M
-XX:PermSize=256m
-XX:MaxPermSize=512m
-XX:+UseConcMarkSweepGC
-XX:OnOutOfMemoryError='/var/tomcat/conf/restart_tomcat.sh'
Memory information:
Memory: Non heap memory = 106 Mb (Perm Gen, Code Cache),
Loaded classes = 14,055,
Garbage collection time = 47,608 ms,
Process cpu time = 4,296,860 ms,
Committed virtual memory = 6,910 Mb,
Free physical memory = 4,906 Mb,
Total physical memory = 8,192 Mb,
Free swap space = 26,079 Mb,
Total swap space = 26,079 Mb
Perm Gen memory: 88 MB / 512 MB
Free disk space: 89,341 Mb
The memory used by Tomcat doesn't look that high compared to the top command.
I also got java.net.SocketException: No buffer space available when trying to connect to the SMTP server or to Facebook servers.
I use Hibernate with a c3p0 connection pool, with this configuration:
<property name="hibernate.connection.driver_class">com.mysql.jdbc.Driver</property>
<property name="hibernate.connection.url">jdbc:mysql://urldb/schema?autoReconnect=true</property>
<property name="hibernate.connection.username">username</property>
<property name="hibernate.dialect">org.hibernate.dialect.MySQL5InnoDBDialect</property>
<property name="hibernate.connection.password"></property>
<property name="connection.characterEncoding">UTF-8</property>
<property name="hibernate.c3p0.acquire_increment">1</property>
<property name="hibernate.c3p0.idle_test_period">300</property>
<property name="hibernate.c3p0.timeout">5000</property>
<property name="hibernate.c3p0.max_size">50</property>
<property name="hibernate.c3p0.min_size">1</property>
<property name="hibernate.c3p0.max_statement">0</property>
<property name="hibernate.c3p0.preferredTestQuery">select 1;</property>
<property name="hibernate.connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
I couldn't find anything... does someone have a hint about where I should be looking?
Thanks!
[UPDATE 1] Heap dump:
Heap histogram (class, instance count, total bytes):
class [C 269780 34210054
class [B 5600 33836661
class java.util.HashMap$Entry 221872 6212416
class [Ljava.util.HashMap$Entry; 23797 6032056
class java.lang.String 271170 5423400
class org.hibernate.hql.ast.tree.Node 103588 4972224
class net.bull.javamelody.CounterRequest 28809 2996136
class org.hibernate.hql.ast.tree.IdentNode 23461 2205334
class java.lang.Class 14677 2113488
class org.hibernate.hql.ast.tree.DotNode 13045 1852390
class [Ljava.lang.String; 48506 1335600
class [Ljava.lang.Object; 12997 1317016
Instance Counts for All Classes (excluding platform) :
103588 instances of class org.hibernate.hql.ast.tree.Node
33366 instances of class antlr.ANTLRHashString
28809 instances of class net.bull.javamelody.CounterRequest
24436 instances of class org.apache.tomcat.util.buf.ByteChunk
23461 instances of class org.hibernate.hql.ast.tree.IdentNode
22781 instances of class org.apache.tomcat.util.buf.CharChunk
22331 instances of class org.apache.tomcat.util.buf.MessageBytes
13045 instances of class org.hibernate.hql.ast.tree.DotNode
10024 instances of class net.bull.javamelody.JRobin
9084 instances of class org.apache.catalina.loader.ResourceEntry
7931 instances of class org.hibernate.hql.ast.tree.SqlNode
[UPDATE 2] server.xml :
<Connector port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443"
URIEncoding="UTF-8"
maxThreads="150"
minSpareThreads="25"
maxSpareThreads="75"
enableLookups="false"
acceptCount="1024"
server="unknown"
address="public_ip"
/>
[UPDATE 3] Output from log files:
2012-06-04 06:18:24,152 [http-bio-ip-8080-exec-3500] ERROR org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/api].[Jersey REST Service]- Servlet.service() for servlet [Jersey REST Service] in context with path [/socialapi] threw exception
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:129)
at org.apache.coyote.http11.InternalInputBuffer.fill(InternalInputBuffer.java:532)
at org.apache.coyote.http11.InternalInputBuffer.fill(InternalInputBuffer.java:501)
at org.apache.coyote.http11.InternalInputBuffer$InputStreamInputBuffer.doRead(InternalInputBuffer.java:563)
at org.apache.coyote.http11.filters.IdentityInputFilter.doRead(IdentityInputFilter.java:118)
at org.apache.coyote.http11.AbstractInputBuffer.doRead(AbstractInputBuffer.java:326)
at org.apache.coyote.Request.doRead(Request.java:422)
[UPDATE 4] ServletContext
I use a ServletContextListener in my application to instantiate controllers and keep a reference to them with event.getServletContext().setAttribute. Those controllers load configurations and translations (the 88 MB in Perm).
Then, to use the database, I use:
SessionFactory sf = dbManager.getSessionFactory(DatabaseManager.DB_KEY_DEFAULT);
Session session = sf.openSession();
Transaction tx = null;
try {
    tx = session.beginTransaction();
    // Do stuff
    tx.commit();
} catch (Exception e) {
    // Do something
} finally {
    session.close();
}
Could this be the source of a leak?
Why not use manual transaction/session management, and how would you do it then?
Try with this parameter:
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=dump.log
Also try with a lower starting memory parameter, -Xms.
Then you can inspect the dump to see if the problem is object allocation.
While it is running, try:
jps
That will output all Java processes; let's say Tomcat is PID 4444:
jmap -dump:format=b,file=heapdump 4444
And
jhat heapdump
If you run out of memory while executing jhat, just give it more memory. From there you can inspect the heap of your application.
Another way to go is to enable Hibernate statistics to check that you are not retrieving more objects than you expect. Also, although a full garbage collection every hour should not be a problem, there is room to do better there. Log the GC activity with:
-verbose:gc -Xloggc:/opt/tomcat/logs/gc.out -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
And with GCViewer, for example, take a look at every space of memory (tenured, eden, survivors, perm).
Another handy tool:
jstack 4444 > stack.txt
That will retrieve a full stack trace of every thread running inside the java process with pid 4444.
Bear in mind that you need privileges if you started Tomcat as root or another user.
jps
won't output processes for which you have no privileges, and therefore you cannot connect to them.
Since I don't know what your application is about (and therefore I don't know its requirements), 3 million instances looks like a lot.
With Hibernate statistics you can see which classes you instantiate the most.
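Enabling the statistics is a one-property change in the Hibernate configuration shown earlier; the collected numbers are then available from SessionFactory.getStatistics(). A minimal addition, using the same property style as your configuration:
<property name="hibernate.generate_statistics">true</property>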
Then tuning the proportions of your eden and tenured spaces can make garbage collection more efficient.
Newly instantiated objects go to eden. When it fills up, a minor GC triggers. What is not collected goes to a survivor space. When that fills up, objects are promoted to tenured. A full GC occurs when tenured is full.
In this picture (which is a simplification) I left aside Strings that become interned and memory-mapped files (which are not on the heap). Take a look at which classes you instantiate most. Intensive use of interned Strings might quickly fill up perm.
I guess you do so already, but use a managed session factory, such as Spring (if it is in your stack), and avoid manual management of transactions and sessions.
Keep in mind that objects are only collected by the GC when nothing refers to them. As long as an object is reachable in your application, it remains.
If your ServletContextListener instantiates controllers and stores them in the ServletContext, make sure you completely remove those references afterwards; if you keep a reference, the objects won't be collected, since they are still reachable.
If you manage your own transactions and sessions (which is fine if you cannot use a framework), then you must deal with code maintenance and the bugs that Spring-tx, for instance, has already solved and improved on.
I personally would take advantage of FOSS, but of course sometimes you cannot enlarge the stack.
If you are using Hibernate, I would take a look at Spring-orm and Spring-tx to manage transactions and sessions. Also take a look at the Hibernate pattern Open Session in View.
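As a rough sketch of what the Spring-managed approach looks like (bean classes from spring-orm for Hibernate 3; adjust package names to your versions, and note that <tx:annotation-driven> needs the Spring tx namespace declared):
<bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
    <property name="configLocation" value="classpath:hibernate.cfg.xml"/>
</bean>
<bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager">
    <property name="sessionFactory" ref="sessionFactory"/>
</bean>
<!-- lets @Transactional service methods open, commit and roll back for you -->
<tx:annotation-driven transaction-manager="transactionManager"/>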
I'd also recommend that you download VisualVM 1.3.3, install all the plugins, and attach it to the Tomcat PID so you can see what's happening in real time. Why wait for a thread dump? It'll also show you CPU, threads, all heap generations, which objects consume the most memory, etc.