Erroneous boolean mapping in Hibernate (ArrayIndexOutOfBoundsException)

I have a persistent Book class with the following properties (property name -> Hibernate mapping type -> Java type):
id -> long -> long
title -> text -> String
author -> string -> String
systemId -> long -> long
status -> boolean -> boolean
fullClassification -> string -> String
And my table description looks like this:
So far everything seems good, but when I try to fetch all the values in the table, I get the following exception:
20:04:43,832 TRACE BasicExtractor:61 - extracted value ([classifi1_1_0_] : [BIGINT]) - [11]
20:04:43,832 TRACE BasicExtractor:61 - extracted value ([collecti1_2_1_] : [BIGINT]) - [11]
20:04:43,833 TRACE BasicExtractor:61 - extracted value ([book_id1_0_2_] : [BIGINT]) - [1]
20:04:43,839 TRACE BasicExtractor:61 - extracted value ([classifi2_1_0_] : [VARCHAR]) - [Prueba]
20:04:43,841 TRACE BasicExtractor:61 - extracted value ([collecti2_2_1_] : [VARCHAR]) - [Prueba]
20:04:43,841 TRACE BasicExtractor:61 - extracted value ([book_tit2_0_2_] : [LONGVARCHAR]) - [Libro de Prueba (No Existe) ]
20:04:43,842 TRACE BasicExtractor:61 - extracted value ([book_aut3_0_2_] : [LONGVARCHAR]) - [Jonathan Pichardo]
20:04:43,842 TRACE BasicExtractor:61 - extracted value ([book_sys4_0_2_] : [BIGINT]) - [190996]
java.lang.ArrayIndexOutOfBoundsException: 57
at com.mysql.cj.mysqla.MysqlaUtils.bitToLong(MysqlaUtils.java:68)
at com.mysql.cj.core.io.MysqlTextValueDecoder.decodeBit(MysqlTextValueDecoder.java:231)
at com.mysql.cj.jdbc.ResultSetRow.decodeAndCreateReturnValue(ResultSetRow.java:170)
at com.mysql.cj.jdbc.ResultSetRow.getValueFromBytes(ResultSetRow.java:269)
at com.mysql.cj.jdbc.BufferRow.getValue(BufferRow.java:349)
at com.mysql.cj.jdbc.ResultSetImpl.getNonStringValueFromRow(ResultSetImpl.java:813)
at com.mysql.cj.jdbc.ResultSetImpl.getBoolean(ResultSetImpl.java:904)
at com.mysql.cj.jdbc.ResultSetImpl.getBoolean(ResultSetImpl.java:908)
at org.hibernate.type.descriptor.sql.BooleanTypeDescriptor$2.doExtract(BooleanTypeDescriptor.java:59)
at org.hibernate.type.descriptor.sql.BasicExtractor.extract(BasicExtractor.java:47)
etc etc etc
The code I'm running is:
Session session = SessionFactoryHandler.buildIfNeeded().openSession();
Criteria crit = session.createCriteria(Book.class);
crit.list();
session.close();
SessionFactoryHandler.closeFactory();
As I understand it, the error happens on the status property; I just don't know why. If I comment out that property mapping in the XML, everything works perfectly, but with it the same exception is always thrown with the same index 57. The value of that column in the database (which holds only one row) makes no difference.
The mapping file is as follows:
<hibernate-mapping package="com.cetys.librarymanagement">
    <class name="com.cetys.librarymanagement.Core.DomainModels.Book" table="book">
        <meta attribute="class-description">
            This class contains the whole description of a Book,
            according to the specification in ALTAIR system.
        </meta>
        <id name="id" type="long" column="book_id"/>
        <property name="title" column="book_title" type="text" length="500" not-null="true"/>
        <property name="author" column="book_author" type="text" not-null="true"/>
        <property name="systemId" column="book_system_id" type="long" not-null="true"/>
        <property name="status" column="book_status" type="boolean" not-null="true"/>
        <property name="fullClassification" column="book_full_classification"
                  type="string" not-null="true"/>
        <many-to-one name="classification" column="classification_id"
                     class="com.cetys.librarymanagement.Core.DomainModels.Classification"
                     not-null="true" unique="false" cascade="save-update" fetch="join"/>
        <many-to-one name="collection" column="collection_id"
                     class="com.cetys.librarymanagement.Core.DomainModels.Collection"
                     not-null="false" unique="false" cascade="save-update" fetch="join"/>
    </class>
</hibernate-mapping>
Any ideas?

From what I see, you are trying to map a BIT column in the database to a boolean in your Hibernate code.
There is a known issue in MySQL with the BIT type: from version 5.0.3 onwards, it no longer stores a single bit value but behaves more like SET or ENUM, which often causes problems when doing numeric value comparisons. For more details check here:
http://www.xaprb.com/blog/2006/04/11/bit-values-in-mysql/
You could ask your DBA to change the column's data type to TINYINT. If that is not possible, you can change the status mapping from boolean to numeric_boolean, which would look like this:
<property name="status" column="book_status" type="numeric_boolean" not-null="true"/>

There are two ways you can achieve the type conversion of the attribute.
By annotating the field with @Type:
@Type(type = "yes_no")
private boolean isActive;
In the database, 'Y'/'N' will be persisted.
By writing a converter:
@Column
@Convert(converter = BooleanConverter.class)
private boolean isActive;
Converter class:
public class BooleanConverter implements AttributeConverter<Boolean, Character> {

    @Override
    public Character convertToDatabaseColumn(Boolean attribute) {
        // true -> 'Y', false -> 'N'
        if (attribute)
            return 'Y';
        else
            return 'N';
    }

    @Override
    public Boolean convertToEntityAttribute(Character dbData) {
        // 'Y' -> true, anything else -> false
        if ('Y' == dbData)
            return true;
        else
            return false;
    }
}
In XML you can replace the type attribute with the converter class name, or use yes_no:
<property name="status" column="book_status" type="yes_no" not-null="true"/>

Related

nested exception is com.ibatis.common.jdbc.exception.NestedSQLException:

I am new to Java and I am trying to call a stored procedure from it. I am getting this exception for the following code:
@SuppressWarnings("unchecked")
List<Object> queryForList(String procName, Map<String, Object> map, CommonVO commonVO) {
    List<Object> resultSetList = null;
    Date connectionAttemptStartDate = null;
    Date cidbQueryStartDate = null;
    List<String> jndiNameList = null;
    String jndiName = null;
    try {
        resultSetList = super.queryForList(procName, map);
    } catch (Exception e) {
        isError = true;
        //exception = e;
        System.out.println(e.getMessage());
        exception = ExceptionUtils.validateCreateCIDBException(e);
    }
    return resultSetList;
}
My XML file is here:
<!DOCTYPE sqlMap PUBLIC "-//iBATIS.com//DTD SQL Map 2.0//EN" "http://www.ibatis.com/dtd/sql-map-2.dtd">
<sqlMap namespace="SP">
    <parameterMap id="objectParameters" class="java.util.Map">
        <parameter property="valueAddedOfferList" jdbcType="VALUE_ADDED_OFFER_LIST" typeName="VALUE_ADDED_OFFER_LIST"
                   typeHandler="com.bam.vision.dao.db.typehandler.ValueAddedOfferListTypeHandlerCallback" mode="OUT"/>
        <parameter property="errorCode" jdbcType="INTEGER" javaType="java.lang.String" mode="OUT"/>
        <parameter property="errorMessage" jdbcType="CHAR" javaType="java.lang.String" mode="OUT"/>
    </parameterMap>
    <procedure id="retrieve_value_added_offer" parameterMap="objectParameters" timeout="2">
        {call retrieve_value_added_offer(?,?,?)}
    </procedure>
</sqlMap>
I am getting an exception like this:
--- Cause: java.lang.ClassCastException: java.math.BigDecimal incompatible with java.lang.String; nested exception is com.ibatis.common.jdbc.exception.NestedSQLException:
--- The error occurred in retrieve_value_added_offer.xml.
--- The error occurred while applying a parameter map.
--- Check the SP.objectParameters.
--- Check the output parameters (retrieval of output parameters failed).
--- Cause: java.lang.ClassCastException: java.math.BigDecimal incompatible with java.lang.String
This line in your XML file looks suspicious to me:
<parameter property="errorCode" jdbcType="INTEGER" javaType="java.lang.String" mode="OUT"/>
What type is the errorCode parameter in your stored procedure?
You have jdbcType="INTEGER", which suggests it's a number, but you also have javaType="java.lang.String", which suggests it's a string. Clearly it can't be both. You haven't included the declaration of your stored procedure so it's impossible to tell which one it actually is.
If your error code is an integer, try changing javaType="java.lang.String" to javaType="java.lang.Integer" (or, if that fails, javaType="java.math.BigDecimal" instead). If your error code is a string, change jdbcType="INTEGER" to jdbcType="VARCHAR".
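To see why the mismatch only blows up when the output parameters are retrieved, here is a minimal JDBC sketch (the procedure name and parameter position are hypothetical): an Oracle NUMBER OUT parameter comes back from the driver as java.math.BigDecimal, so asking iBATIS to hand it over as java.lang.String forces the failing cast.
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.Types;

public class OutParamDemo {

    // Hypothetical single-OUT-parameter procedure, for illustration only.
    static String readErrorCode(Connection conn) throws Exception {
        try (CallableStatement cs = conn.prepareCall("{call get_error_code(?)}")) {
            cs.registerOutParameter(1, Types.INTEGER);
            cs.execute();
            Object raw = cs.getObject(1);  // java.math.BigDecimal with Oracle drivers
            // return (String) raw;        // ClassCastException, like the one above
            return String.valueOf(raw);    // convert explicitly instead
        }
    }
}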

How to output 'GETDATE()' string into string column with Liquibase 3.5 for MSSQL with loadData change?

I have a Liquibase loadData change:
<changeSet author="author" id="13">
    <sql>SET IDENTITY_INSERT M_Conversion ON</sql>
    <loadData tableName="M_Conversion" separator="~" file="data/13-M_Conversion.csv">
        <column name="MconID" type="NUMERIC"/>
        <column name="MdbtID" type="NUMERIC"/>
        <column name="MlokID" type="NUMERIC"/>
        <column name="MconKeyword" type="STRING"/>
        <column name="MconExtra" type="STRING"/>
    </loadData>
    <sql>SET IDENTITY_INSERT M_Conversion OFF</sql>
    <rollback>
        <sql>SET IDENTITY_INSERT M_Conversion OFF</sql>
    </rollback>
</changeSet>
In the 13-M_Conversion.csv file there is one value that causes problems for MSSQL: 201~2~200~GETDATE()~. The GETDATE() value for the MconKeyword column is not quoted as a string in the resulting SQL.
The SQL that Liquibase generates for this data is:
INSERT INTO [pd].[M_Conversion] ([MconID], [MdbtID], [MlokID], [MconKeyword], [MconExtra]) VALUES (201, 2, 200, GETDATE()); but it should be (201, 2, 200, 'GETDATE()')
The problem is that in the method liquibase.sqlgenerator.core.InsertGenerator#generateValues, for String values there is a call that checks whether the string looks like a function call:
public boolean looksLikeFunctionCall(String value, Database database) {
    return value.startsWith("\"SYSIBM\"") || value.startsWith("to_date(")
            || value.equalsIgnoreCase(database.getCurrentDateTimeFunction());
}
and for MSSQLDatabase, GETDATE() is exactly that currentDateTimeFunction.
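A minimal sketch of that check (my illustration; the literal "GETDATE()" is assumed to be what MSSQLDatabase reports as its current-date-time function):
public class FunctionCallCheckDemo {

    public static void main(String[] args) {
        String value = "GETDATE()";                    // the CSV cell
        String currentDateTimeFunction = "GETDATE()";  // reported by MSSQLDatabase
        boolean looksLikeFunctionCall = value.startsWith("\"SYSIBM\"")
                || value.startsWith("to_date(")
                || value.equalsIgnoreCase(currentDateTimeFunction);
        System.out.println(looksLikeFunctionCall);     // true -> emitted without quotes
    }
}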
So the question is: how do I get this specific value to be escaped properly with the loadData change?

Solr Custom Transformer Not Working?

I am trying to add a few fields to the Solr index while indexing with the Data Import Handler.
Below is my data-config.xml:
<dataConfig>
    <dataSource type="JdbcDataSource" driver="com.mysql.jdbc.Driver"
                url="jdbc:mysql://localhost:3306/db" user="*****" password="*********"/>
    <script><![CDATA[
        function addMergedPdt(row) {
            var m = row.get('mergedPdt');
            if (m == null) {
                row.put('mergedPdt', 9999999999);
            }
            return row;
        }
    ]]></script>
    <script><![CDATA[
        function transform(row) {
            if (row.get(mergedPdt) == null) {
                row.put('catStock', 0);
                row.put('catPxMrp', 0);
                row.put('catPrice', 0);
                row.put('catCount', 1)
                row.put('catRating', 0);
                row.put('catAval', 0);
                return row;
            } else {
                row.put('catAval', 1);
                return row;
            }
        }
    ]]></script>
    <document>
        <entity name="product" onError="continue" transformer="script:addMergedPdt"
                query="select p.id, name, image, stock, lower(p.seller) as seller, brand,
                       cast(price as signed) as price, cast(pxMrp as signed) as pxMrp, mergedPdt, shipDays, url,
                       cast(rating as signed) as rating, disc(price, pxMrp) as discount, mc.node as seller_cat,
                       oc.node as cat, substring_index(oc.node, '|', 1) as cat1,
                       substring(substring_index(oc.node, '|', 2), length(substring_index(oc.node, '|', 1)) + 2) as cat2,
                       substring(substring_index(oc.node, '|', 3), length(substring_index(oc.node, '|', 2)) + 2) as cat3
                       from _products as p, _mergedCat as mc, _ourCat as oc
                       where active = 1 and cat_id = mc.id and ourCat = oc.id and
                       ('${dataimporter.request.full}' != 'false' OR last_visited > '${dataimporter.last_index_time}') limit 10000">
            <!-- To populate the catalog data -->
            <entity name="mergedPdt" transformer="script:transform" onError="continue"
                    query="SELECT mergedPdt, count(*) as catCount, cast(max(stock) as signed) as catStock,
                           cast(max(pxMrp) as signed) as catPxMrp, cast(min(price) as signed) as catPrice,
                           cast(avg(rating) as signed) as catRating
                           FROM `_products` where mergedPdt = ${product.mergedPdt}"/>
        </entity>
    </document>
</dataConfig>
I am getting an error like:
org.apache.solr.handler.dataimport.DataImportHandlerException: Error invoking script for entity mergedPdt Processing Document # 10000
at org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:70)
at org.apache.solr.handler.dataimport.ScriptTransformer.transformRow(ScriptTransformer.java:59)
at org.apache.solr.handler.dataimport.EntityProcessorWrapper.applyTransformer(EntityProcessorWrapper.java:198)
at org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:256)
at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:475)
at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:514)
at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:414)
at org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:329)
at org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:232)
at org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:416)
at org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:480)
at org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:461)
Caused by: java.lang.NoSuchMethodException: no such method: transform
at com.sun.script.javascript.RhinoScriptEngine.invoke(RhinoScriptEngine.java:286)
at com.sun.script.javascript.RhinoScriptEngine.invokeFunction(RhinoScriptEngine.java:258)
at org.apache.solr.handler.dataimport.ScriptTransformer.transformRow(ScriptTransformer.java:55)
... 10 more
All fields are being indexed except the extra fields I tried adding with the transformer.
Surprisingly, only one field, "catCount", has been indexed.
You can trust me that the schema definition and the other configurations are correct.
Any lead will be highly appreciated.
Thanks in advance :)

Oracle Stored Procedure - Spring Integration - OUT Type Object

Is it possible to have an Oracle type object as an output from a stored procedure, calling it using Spring Integration?
For example, I have the following in the database:
create or replace TYPE ESP_TRAINING_REQ_OBJ AS OBJECT
(
    v_param1 varchar2(25),
    v_param2 varchar2(25)
);

create or replace TYPE ESP_TRAINING_RESP_OBJ AS OBJECT
(
    v_param1 varchar2(25),
    v_param2 varchar2(25)
);

create or replace PROCEDURE TEST_PROC (
    v_req_obj IN ESP_TRAINING_REQ_OBJ,
    v_resp_obj OUT ESP_TRAINING_RESP_OBJ
) AS
BEGIN
    v_resp_obj := ESP_TRAINING_RESP_OBJ(v_req_obj.v_param2, v_req_obj.v_param1);
    dbms_output.put_line('TEST_PROC');
END;
However, when I try calling it, I'm getting the following exception:
PLS-00306: wrong number or types of arguments in call to 'TEST_PROC'
Please find the Spring Integration configuration below:
<int-jdbc:stored-proc-outbound-gateway
        id="ESP_TRAINING" request-channel="inputChannel"
        stored-procedure-name="TEST_PROC" data-source="dataSource"
        reply-channel="outputChannel"
        skip-undeclared-results="false" ignore-column-meta-data="true">
    <int-jdbc:sql-parameter-definition name="v_req_obj" direction="IN" type="STRUCT"/>
    <int-jdbc:sql-parameter-definition name="v_resp_obj" direction="OUT" type="STRUCT"/>
    <int-jdbc:parameter name="v_req_obj" expression="payload.v_req_obj"/>
</int-jdbc:stored-proc-outbound-gateway>
Please note it works fine if we change the SP declaration above to use a STRUCT in the request only, for example replacing ESP_TRAINING_RESP_OBJ with a VARCHAR2 or any other Oracle primitive data type.
For example:
create or replace PROCEDURE TEST_PROC (
    v_req_obj IN ESP_TRAINING_REQ_OBJ,
    v_status OUT VARCHAR2
) AS
BEGIN
    v_status := v_req_obj.v_param1 || ' and ' || v_req_obj.v_param2;
    dbms_output.put_line('TEST_PROC');
END;
I've fixed it by doing the following:
1. Updated the Spring Integration version to 3.0.0.RELEASE, which adds support for both the type-name and return-type attributes inside sql-parameter-definition.
2. Updated the stored-proc gateway declaration as follows:
<int-jdbc:stored-proc-outbound-gateway
        id="ESP_TRAINING" request-channel="inputChannel"
        stored-procedure-name="TEST_PROC" data-source="dataSource"
        reply-channel="outputChannel"
        skip-undeclared-results="false" ignore-column-meta-data="true">
    <int-jdbc:sql-parameter-definition name="v_req_obj" direction="IN" type="STRUCT"/>
    <int-jdbc:sql-parameter-definition name="v_resp_obj" direction="OUT" type="STRUCT"
                                       type-name="ESP_TRAINING_RESP_OBJ" return-type="espTrainingRespObj"/>
    <int-jdbc:parameter name="v_req_obj" expression="payload.v_req_obj"/>
</int-jdbc:stored-proc-outbound-gateway>

<beans:bean id="espTrainingRespObj" class="com.hsbc.esp.EspTrainingRespObj"/>
3. Changed EspTrainingRespObj to implement SqlReturnType, as follows:
import java.sql.CallableStatement;
import java.sql.SQLException;

import oracle.sql.STRUCT;
import org.springframework.jdbc.core.SqlReturnType;

public class EspTrainingRespObj implements SqlReturnType {

    private String param1;
    private String param2;

    @Override
    public Object getTypeValue(CallableStatement cs, int paramIndex, int sqlType, String typeName)
            throws SQLException {
        // Extract the Oracle OBJECT's attributes in declaration order (v_param1, v_param2).
        Object[] attributes = ((STRUCT) cs.getObject(paramIndex)).getAttributes();
        this.param1 = (String) attributes[0];
        this.param2 = (String) attributes[1];
        return this;
    }
    ...
}
The return-type attribute on the <int-jdbc:sql-parameter-definition> for the OUT parameter, together with an SqlReturnType implementation such as SqlReturnStruct, should help you solve the issue.
The test case in the Framework source code contains this sample for CLOB handling:
<int-jdbc:stored-proc-outbound-gateway request-channel="getMessageChannel"
        data-source="dataSource"
        stored-procedure-name="GET_MESSAGE"
        ignore-column-meta-data="true"
        expect-single-result="true"
        reply-channel="output2Channel">
    <int-jdbc:sql-parameter-definition name="message_id"/>
    <int-jdbc:sql-parameter-definition name="message_json" type="CLOB" direction="OUT"
                                       type-name="" return-type="clobSqlReturnType"/>
    <int-jdbc:parameter name="message_id" expression="payload"/>
</int-jdbc:stored-proc-outbound-gateway>

<bean id="clobSqlReturnType" class="org.mockito.Mockito" factory-method="spy">
    <constructor-arg>
        <bean class="org.springframework.integration.jdbc.storedproc.ClobSqlReturnType"/>
    </constructor-arg>
</bean>
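As a side note (my addition, not from the original answer): since oracle.sql.STRUCT implements the standard java.sql.Struct interface, the extraction shown above can also be written without Oracle-specific imports, for example:
import java.sql.CallableStatement;
import java.sql.SQLException;
import java.sql.Struct;

import org.springframework.jdbc.core.SqlReturnType;

public class StructReturnType implements SqlReturnType {

    @Override
    public Object getTypeValue(CallableStatement cs, int paramIndex, int sqlType, String typeName)
            throws SQLException {
        Struct struct = (Struct) cs.getObject(paramIndex);
        // Attributes come back in declaration order: v_param1, v_param2.
        return struct.getAttributes();
    }
}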

PostgreSQL infinite loop after a Hibernate criteria list

I've been stuck on a database problem for several days now. The application hangs during a specific Hibernate criteria.list() call, at exactly the following stack trace:
java.net.SocketInputStream.read(byte[], int, int)
org.postgresql.core.VisibleBufferedInputStream.readMore(int)
org.postgresql.core.VisibleBufferedInputStream.ensureBytes(int)
org.postgresql.core.VisibleBufferedInputStream.read()
org.postgresql.core.PGStream.ReceiveChar()
org.postgresql.core.v3.QueryExecutorImpl.processResults(ResultHandler, int)
org.postgresql.core.v3.QueryExecutorImpl.execute(Query, ParameterList, ResultHandler, int, int, int)
org.postgresql.jdbc2.AbstractJdbc2Statement.execute(Query, ParameterList, int)
org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(int)
org.postgresql.jdbc2.AbstractJdbc2Statement.executeQuery()
org.hibernate.internal.CriteriaImpl.list()
After some research and testing, I found that the problem is not a blocked query, but a query that executes forever.
It's a Java Spring application with the following sessionFactory and transaction manager configuration:
<bean id="dataSource" class="org.apache.commons.dbcp2.BasicDataSource" destroy-method="close">
<property name="driverClassName" value="org.postgresql.Driver" />
<property name="url" value="jdbc:postgresql://localhost:5432/database" />
<property name="username" value="username" />
<property name="password" value="password" />
</bean>
<bean id="sessionFactory"
class="org.springframework.orm.hibernate4.LocalSessionFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="packagesToScan" value="com.myapp.domain" />
<property name="configLocation" value="/WEB-INF/hibernate.cfg.xml" />
</bean>
<tx:annotation-driven transaction-manager="transactionManager" />
<bean id="transactionManager"
class="org.springframework.orm.hibernate4.HibernateTransactionManager">
<property name="sessionFactory" ref="sessionFactory" />
The underlying database is PostgreSQL, and here is the current Hibernate configuration:
<hibernate-configuration>
    <session-factory>
        <property name="dialect">org.hibernate.dialect.PostgreSQLDialect</property>
        <property name="hbm2ddl.auto">none</property>
        <property name="hibernate.cache.region.factory_class">org.hibernate.cache.ehcache.SingletonEhCacheRegionFactory</property>
        <property name="hibernate.cache.use_second_level_cache">true</property>
        <property name="cache.use_query_cache">true</property>
        <property name="hibernate.transaction.factory_class">
            org.hibernate.transaction.JDBCTransactionFactory</property>
        <property name="show_sql">false</property>
        <property name="format_sql">true</property>
        <property name="use_sql_comments">false</property>
        <property name="order_updates">true</property>
    </session-factory>
</hibernate-configuration>
The critical area in the code is:
private void fillEmptyNames() throws CablewatchException {
    List<Device> devicesList = deviceDao.getDevices();
    if (devicesList != null) {
        for (Device device : devicesList) {
            String name = deviceDao.getDeviceName(device.getModule().getObjectIdentifier(),
                    device.getSubrack(), device.getSlot());
            ...
        }
    }
}
The application hangs on the second DAO function, getDeviceName, which is implemented as follows:
@Transactional(timeout = 30)
public String getDeviceName(long moduleId, int subrackNr, int slotNr) throws CablewatchException {
    Criteria criteria = sessionFactory.getCurrentSession().createCriteria(Device.class)
            .add(Restrictions.eq("module.objectIdentifier", moduleId))
            .add(Restrictions.eq("subrack", subrackNr))
            .add(Restrictions.eq("slot", slotNr))
            .addOrder(Order.desc("objectIdentifier"))
            .setMaxResults(1);
    List<Device> devicesList = criteria.list();
    if (devicesList != null && !devicesList.isEmpty() && devicesList.get(0) instanceof Device) {
        Device device = devicesList.get(0);
        return device.getName();
    }
    return null;
}
Another detail I'm confronted with: the same code works fine under Windows, so currently the problem only happens on Linux.
UPDATE:
The generated query is:
select
this_.objectIdentifier as objectId1_0_9_,
this_.ackId as ackId11_0_9_,
this_.alarmInfoId as alarmIn12_0_9_,
this_.cleared as cleared2_0_9_,
this_.clearedTime as clearedT3_0_9_,
this_.logIndex as logIndex4_0_9_,
this_.module as module5_0_9_,
this_.neId as neId13_0_9_,
this_.occurenceTime as occurenc6_0_9_,
this_.serial as serial7_0_9_,
this_.severityId as severit14_0_9_,
this_.slot as slot8_0_9_,
this_.subrack as subrack9_0_9_,
this_.value as value10_0_9_,
acknowledg2_.objectIdentifier as objectId1_2_0_,
acknowledg2_.comment as comment2_2_0_,
acknowledg2_.username as username3_2_0_,
alarminfo3_.objectIdentifier as objectId1_1_1_,
alarminfo3_.cw_alarmMessage as cw_alarm2_1_1_,
alarminfo3_.cw_alarmOid as cw_alarm3_1_1_,
ne4_.OBJECTIDENTIFIER as OBJECTID1_8_2_,
ne4_.cw_neActive as cw_neAct2_8_2_,
ne4_.cw_neCategory as cw_neCat3_8_2_,
ne4_.cw_neFirmware as cw_neFir4_8_2_,
ne4_.cw_neHasWebInterface as cw_neHas5_8_2_,
ne4_.cw_neInetAddress as cw_neIne6_8_2_,
ne4_.cw_neInfo as cw_neInf7_8_2_,
ne4_.cw_neMacAddress as cw_neMac8_8_2_,
ne4_.cw_neModel as cw_neMod9_8_2_,
ne4_.cw_neName as cw_neNa10_8_2_,
ne4_.cw_neSerial as cw_neSe11_8_2_,
ne4_.cw_neSysDescription as cw_neSy12_8_2_,
ne4_.cw_neType as cw_neTy13_8_2_,
ne4_.cw_installationDate as cw_inst14_8_2_,
ne4_.cw_instance as cw_inst15_8_2_,
ne4_.cw_lastAlarmLogIndex as cw_last16_8_2_,
ne4_.cw_locationId as cw_loca19_8_2_,
ne4_.cw_readCommunity as cw_read17_8_2_,
ne4_.cw_severityId as cw_seve20_8_2_,
ne4_.cw_writeCommunity as cw_writ18_8_2_,
location5_.objectIdentifier as objectId1_5_3_,
location5_.cw_imageName as cw_image2_5_3_,
location5_.cw_locationCity as cw_locat3_5_3_,
location5_.cw_locationCode as cw_locat4_5_3_,
location5_.cw_locationContact as cw_locat5_5_3_,
location5_.cw_locationDescription1 as cw_locat6_5_3_,
location5_.cw_locationDescription2 as cw_locat7_5_3_,
location5_.cw_locationName as cw_locat8_5_3_,
location5_.cw_locationStreet as cw_locat9_5_3_,
location5_.cw_locationType as cw_loca10_5_3_,
location5_.cw_parentLocationId as cw_pare11_5_3_,
location5_.cw_severityId as cw_seve12_5_3_,
location5_.cw_sublocationSeverityId as cw_subl13_5_3_,
location6_.objectIdentifier as objectId1_5_4_,
location6_.cw_imageName as cw_image2_5_4_,
location6_.cw_locationCity as cw_locat3_5_4_,
location6_.cw_locationCode as cw_locat4_5_4_,
location6_.cw_locationContact as cw_locat5_5_4_,
location6_.cw_locationDescription1 as cw_locat6_5_4_,
location6_.cw_locationDescription2 as cw_locat7_5_4_,
location6_.cw_locationName as cw_locat8_5_4_,
location6_.cw_locationStreet as cw_locat9_5_4_,
location6_.cw_locationType as cw_loca10_5_4_,
location6_.cw_parentLocationId as cw_pare11_5_4_,
location6_.cw_severityId as cw_seve12_5_4_,
location6_.cw_sublocationSeverityId as cw_subl13_5_4_,
severity7_.id as id1_15_5_,
severity7_.cw_severityColor as cw_sever2_15_5_,
severity7_.cw_severityName as cw_sever3_15_5_,
severity8_.id as id1_15_6_,
severity8_.cw_severityColor as cw_sever2_15_6_,
severity8_.cw_severityName as cw_sever3_15_6_,
severity9_.id as id1_15_7_,
severity9_.cw_severityColor as cw_sever2_15_7_,
severity9_.cw_severityName as cw_sever3_15_7_,
severity10_.id as id1_15_8_,
severity10_.cw_severityColor as cw_sever2_15_8_,
severity10_.cw_severityName as cw_sever3_15_8_
from
CW_ALARM this_
left outer join
CW_Acknowledgment acknowledg2_
on this_.ackId=acknowledg2_.objectIdentifier
left outer join
CW_ALARMINFO alarminfo3_
on this_.alarmInfoId=alarminfo3_.objectIdentifier
left outer join
CW_NE ne4_
on this_.neId=ne4_.OBJECTIDENTIFIER
left outer join
CW_LOCATION location5_
on ne4_.cw_locationId=location5_.objectIdentifier
left outer join
CW_LOCATION location6_
on location5_.cw_parentLocationId=location6_.objectIdentifier
left outer join
CW_SEVERITY severity7_
on location6_.cw_severityId=severity7_.id
left outer join
CW_SEVERITY severity8_
on location6_.cw_sublocationSeverityId=severity8_.id
left outer join
CW_SEVERITY severity9_
on ne4_.cw_severityId=severity9_.id
left outer join
CW_SEVERITY severity10_
on this_.severityId=severity10_.id
where
this_.neId=?
and this_.subrack=?
and this_.slot=?
and this_.module<>?
order by
this_.objectIdentifier desc limit ?
I executed it from pgAdmin (and replaced the parameters with their values) and it works fine. Below is the query plan:
"Limit (cost=25819.66..25819.66 rows=1 width=1185)"
" -> Sort (cost=25819.66..25819.66 rows=1 width=1185)"
" Sort Key: this_.objectidentifier"
" -> Nested Loop Left Join (cost=0.00..25819.65 rows=1 width=1185)"
" -> Nested Loop Left Join (cost=0.00..25811.37 rows=1 width=1021)"
" -> Nested Loop Left Join (cost=0.00..25803.09 rows=1 width=857)"
" -> Nested Loop Left Join (cost=0.00..25799.21 rows=1 width=693)"
" -> Nested Loop Left Join (cost=0.00..25795.33 rows=1 width=529)"
" -> Nested Loop Left Join (cost=0.00..25793.45 rows=1 width=464)"
" Join Filter: (ne4_.cw_locationid = location5_.objectidentifier)"
" -> Nested Loop Left Join (cost=0.00..25791.22 rows=1 width=399)"
" Join Filter: (this_.neid = ne4_.objectidentifier)"
" -> Nested Loop Left Join (cost=0.00..25788.76 rows=1 width=225)"
" -> Nested Loop Left Join (cost=0.00..25780.48 rows=1 width=150)"
" Join Filter: (this_.ackid = acknowledg2_.objectidentifier)"
" -> Seq Scan on cw_alarm this_ (cost=0.00..25779.32 rows=1 width=132)"
" Filter: (((module)::text <> ''::text) AND (neid = 471) AND (subrack = (-1)) AND (slot = (-1)))"
" -> Seq Scan on cw_acknowledgment acknowledg2_ (cost=0.00..1.07 rows=7 width=18)"
" -> Index Scan using cw_alarminfo_pkey on cw_alarminfo alarminfo3_ (cost=0.00..8.27 rows=1 width=75)"
" Index Cond: (this_.alarminfoid = objectidentifier)"
" -> Seq Scan on cw_ne ne4_ (cost=0.00..2.45 rows=1 width=174)"
" Filter: (objectidentifier = 471)"
" -> Seq Scan on cw_location location5_ (cost=0.00..2.10 rows=10 width=65)"
" -> Index Scan using cw_location_pkey on cw_location location6_ (cost=0.00..1.87 rows=1 width=65)"
" Index Cond: (location5_.cw_parentlocationid = objectidentifier)"
" -> Index Scan using cw_severity_pkey on cw_severity severity7_ (cost=0.00..3.87 rows=1 width=164)"
" Index Cond: (location6_.cw_severityid = id)"
" -> Index Scan using cw_severity_pkey on cw_severity severity8_ (cost=0.00..3.87 rows=1 width=164)"
" Index Cond: (location6_.cw_sublocationseverityid = id)"
" -> Index Scan using cw_severity_pkey on cw_severity severity9_ (cost=0.00..8.27 rows=1 width=164)"
" Index Cond: (ne4_.cw_severityid = id)"
" -> Index Scan using cw_severity_pkey on cw_severity severity10_ (cost=0.00..8.27 rows=1 width=164)"
" Index Cond: (this_.severityid = id)"
Following up on the Linux/Windows detail, I tried the same test, but instead of the local postgresql 9.1.13 (Debian) database I used remote access to a postgresql 9.3.5 (Windows) instance, and I also installed and tried postgresql 9.1.13 (Windows). Both worked correctly.
I also tried the same code from my Windows system against the remote postgresql 9.1.13 (Debian) and against another machine with a remote postgresql 9.1.15 (Debian). In both cases the problem occurs.
It seems the problem could lie in the Linux builds of postgresql 9.1.x.
Thanks in advance.
After debugging the database with the commands Craig and Clemens suggested (displaying current queries, pg_stat_activity, EXPLAIN ANALYZE of the select, pg_locks, etc.), I found out that it isn't an infinite loop: the query accesses more than 20,000 entries, so the running time was simply very long, and the ORM layer stretched it to several hours. I'm working on a small redesign of the database to optimize this.
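For reference, a minimal sketch of the pg_stat_activity check mentioned above (my addition; it assumes the pre-9.2 column names procpid and current_query that PostgreSQL 9.1 uses, plus the connection settings from the question):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ActivityCheck {

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://localhost:5432/database", "username", "password");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                     "SELECT procpid, now() - query_start AS runtime, current_query "
                   + "FROM pg_stat_activity ORDER BY query_start")) {
            while (rs.next()) {
                // Long runtimes here reveal a slow query rather than a hung connection.
                System.out.println(rs.getString(1) + " | " + rs.getString(2) + " | " + rs.getString(3));
            }
        }
    }
}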
Thanks guys for the support.
