InvalidRequestException(why:line 1:184 mismatched character ')' expecting '-') - java

When I tried to save an entity to Cassandra using the persist() method and the Kundera framework, I received this error:
18976 [Thread-15-localhostAMQPbolt0-executor[2 2]] INFO c.i.c.c.CassandraClientBase - Returning cql query INSERT INTO "pieces"("width","depth","height","idpiece") VALUES(10.0,12.0,11.0,'1') .
18998 [Thread-15-localhostAMQPbolt0-executor[2 2]] INFO d.c.DatabaseController - insert piece to database: SUCCESS
18998 [Thread-15-localhostAMQPbolt0-executor[2 2]] INFO d.d.SensorDAOImpl - start to insert data
19011 [Thread-15-localhostAMQPbolt0-executor[2 2]] INFO c.i.c.c.CassandraClientBase - Returning cql query INSERT INTO "sensors"("event_time","temperature","pressure","IdSensor","date","this$0") VALUES(1462959800344,10.0,10.0,'1',150055,sensor.entitie.predefinedModel.SensorEntitie@1c4a9b7b) .
19015 [Thread-15-localhostAMQPbolt0-executor[2 2]] ERROR c.i.c.c.CassandraClientBase - Error while executing query INSERT INTO "sensors"("event_time","temperature","pressure","IdSensor","date","this$0") VALUES(1462959800344,10.0,10.0,'1',150055,sensor.entitie.predefinedModel.SensorEntitie@1c4a9b7b)
19015 [Thread-15-localhostAMQPbolt0-executor[2 2]] INFO c.i.c.c.CassandraClientBase - Returning delete query DELETE FROM "pieces" WHERE "idpiece" = '1'.
19018 [Thread-15-localhostAMQPbolt0-executor[2 2]] ERROR o.a.s.util - Async loop died!
java.lang.RuntimeException: com.impetus.kundera.KunderaException: com.impetus.kundera.KunderaException: InvalidRequestException(why:line 1:184 mismatched character ')' expecting '-')
at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:448) ~[storm-core-1.0.0.jar:1.0.0]
at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:414) ~[storm-core-1.0.0.jar:1.0.0]
at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) ~[storm-core-1.0.0.jar:1.0.0]
at org.apache.storm.daemon.executor$fn__8226$fn__8239$fn__8292.invoke(executor.clj:851) ~[storm-core-1.0.0.jar:1.0.0]
at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.0.jar:1.0.0]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_99]
Caused by: com.impetus.kundera.KunderaException: com.impetus.kundera.KunderaException: InvalidRequestException(why:line 1:184 mismatched character ')' expecting '-')
at com.impetus.kundera.persistence.EntityManagerImpl.persist(EntityManagerImpl.java:180) ~[project-0.0.1-SNAPSHOT-jar-with-dependencies.jar:?]
at database.dao.SensorDAOImpl.insert(SensorDAOImpl.java:54) ~[project-0.0.1-SNAPSHOT-jar-with-dependencies.jar:?]
at database.controller.DatabaseController.saveSensorEntitie(DatabaseController.java:49) ~[project-0.0.1-SNAPSHOT-jar-with-dependencies.jar:?]
at connector.bolt.PrinterBolt.execute(PrinterBolt.java:66) ~[project-0.0.1-SNAPSHOT-jar-with-dependencies.jar:?]
at org.apache.storm.daemon.executor$fn__8226$tuple_action_fn__8228.invoke(executor.clj:731) ~[storm-core-1.0.0.jar:1.0.0]
at org.apache.storm.daemon.executor$mk_task_receiver$fn__8147.invoke(executor.clj:463) ~[storm-core-1.0.0.jar:1.0.0]
at org.apache.storm.disruptor$clojure_handler$reify__7663.onEvent(disruptor.clj:40) ~[storm-core-1.0.0.jar:1.0.0]
at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:435) ~[storm-core-1.0.0.jar:1.0.0]
... 6 more
As you can see, I want to use a @OneToMany relation.
My piece entity class:
@Entity
@Table(name = "pieces", schema = "mykeyspace@cassandra_pu")
public class PieceEntitie implements Serializable{
@Id
private String IdPiece;
@Column
private double width;
@Column
private double height;
@Column
private double depth;
My sensor entity class:
@EmbeddedId
private CompoundKey key;
@Column
private float temperature;
@Column
private float pressure;
@OneToMany(cascade = { CascadeType.ALL }, fetch = FetchType.EAGER)
@JoinColumn(name="idsensor")
private List<PieceEntitie> pieces;
@Embeddable
public class CompoundKey
{
@Column
private String IdSensor;
@Column
private long date;
@Column(name = "event_time")
private long eventTime;
}
My tables:
CREATE TABLE mykeyspace.sensors (
idsensor text,
date bigint,
event_time timestamp,
pressure float,
temperature float,
PRIMARY KEY ((idsensor, date), event_time)
) WITH CLUSTERING ORDER BY (event_time ASC)
AND bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';
cqlsh:sensor> DESCRIBE table pieces ;
CREATE TABLE mykeyspace.pieces (
idpiece text PRIMARY KEY,
depth double,
height double,
idsensor text,
width double
) WITH bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';
Tutorial followed: https://github.com/impetus-opensource/Kundera/wiki/Polyglot-Persistence
How can I resolve this problem?

I resolved the problem by separating the CompoundKey class from the sensor class.
Before, I had the CompoundKey class nested inside the sensor class, so Kundera was trying to insert the enclosing instance (the this$0 value visible in the generated query) as an attribute.
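For reference, a minimal sketch of the corrected layout (class names taken from the log; getters, setters and constructors omitted). With CompoundKey as a top-level @Embeddable class, the synthetic this$0 reference to the enclosing instance disappears from the generated INSERT:
// CompoundKey.java -- now a top-level class instead of an inner class
@Embeddable
public class CompoundKey implements Serializable {
    @Column
    private String IdSensor;
    @Column
    private long date;
    @Column(name = "event_time")
    private long eventTime;
}

// SensorEntitie.java
@Entity
@Table(name = "sensors", schema = "mykeyspace@cassandra_pu")
public class SensorEntitie implements Serializable {
    @EmbeddedId
    private CompoundKey key;
    @Column
    private float temperature;
    @Column
    private float pressure;
    @OneToMany(cascade = CascadeType.ALL, fetch = FetchType.EAGER)
    @JoinColumn(name = "idsensor")
    private List<PieceEntitie> pieces;
}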

Related

Flink SQL Result field does not match requested type error on LocalDateTime

When I group the select below, I get a type matching error. I have already tried to CAST as TIMESTAMP and tried changing the POJO's LocalDateTime type. Most of the sample code converts to Row.class; I could not find any example using a custom class.
SELECT name, MIN(price) AS minPrice, MAX(price) AS maxPrice, AVG(price) AS avarage, COUNT(name) as sayi, TUMBLE_START(rowtime, INTERVAL '5' SECOND) AS zaman FROM STOCK GROUP BY TUMBLE(rowtime, INTERVAL '5' SECOND), name
Thrown Error:
Exception in thread "main" org.apache.flink.table.api.TableException: Result field 'zaman' does not match requested type. Requested: GenericType<java.time.LocalDateTime>; Actual: LocalDateTime
Code:
tableEnvironment.registerDataStream("STOCK", messageStream, "name, price, rowtime.rowtime");
Table result = tableEnvironment.sqlQuery(
"SELECT name, MIN(price) AS minPrice, MAX(price) AS maxPrice, AVG(price) AS avarage, COUNT(name) as sayi, TUMBLE_START(rowtime, INTERVAL '5' SECOND) AS zaman FROM STOCK GROUP BY TUMBLE(rowtime, INTERVAL '5' SECOND), name");
result.printSchema();
FlinkKafkaProducer011<String> myProducer = new FlinkKafkaProducer011<String>(kp.getProducerProperties().getProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG),
"STOCKGROUP", new SimpleStringSchema());
myProducer.setWriteTimestampToKafka(true);
DataStream<Tuple2<Boolean, StockGroup>> stream = tableEnvironment.toRetractStream(result, StockGroup.class);
stream.map(x -> x.f1.toString()).addSink(myProducer);
StockGroup.class POJO:
public String name;
public Double minPrice;
public Double maxPrice;
public Double avarage;
public Long sayi;
public LocalDateTime zaman;
Printed schema:
root
|-- name: STRING
|-- minPrice: DOUBLE
|-- maxPrice: DOUBLE
|-- avarage: DOUBLE
|-- sayi: BIGINT NOT NULL
|-- zaman: TIMESTAMP(3) *ROWTIME*
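Not from the original post, but one workaround often suggested for this mismatch is to declare the window-start field as java.sql.Timestamp, which has a dedicated Flink TypeInformation, instead of java.time.LocalDateTime, which falls back to GenericType. A sketch (field names must keep matching the SELECT aliases):
// StockGroup.java -- sketch with java.sql.Timestamp instead of LocalDateTime
import java.sql.Timestamp;

public class StockGroup {
    public String name;
    public Double minPrice;
    public Double maxPrice;
    public Double avarage; // names must match the query aliases
    public Long sayi;
    public Timestamp zaman; // dedicated TypeInformation, no GenericType fallback

    @Override
    public String toString() {
        return name + "," + minPrice + "," + maxPrice + "," + avarage + "," + sayi + "," + zaman;
    }
}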

Cassandra failure during read query at consistency LOCAL_ONE (1 responses were required but only 0 replica responded, 1 failed)

Below is my script
CREATE TABLE alrashed.tbl_alerts_details (
alert_id int,
action_required int,
alert_agent_id int,
alert_agent_type_id int,
alert_agent_type_name text,
alert_definer_desc text,
alert_definer_name text,
alert_source text,
alert_state text,
col_1 text,
col_2 text,
col_3 text,
col_4 text,
col_5 text,
current_escalation_level text,
date_part date,
device_id text,
driver map<text, text>,
is_processed int,
is_real_time int,
location map<text, text>,
seq_no int,
severity text,
time_stamp timestamp,
transporter map<text, text>,
transporter_name text,
trip_id int,
updated_on timestamp,
vehicle map<text, text>,
vehicle_type_name text,
PRIMARY KEY (alert_id)
) WITH read_repair_chance = 0.0
AND dclocal_read_repair_chance = 0.1
AND gc_grace_seconds = 864000
AND bloom_filter_fp_chance = 0.01
AND caching = { 'keys' : 'ALL', 'rows_per_partition' : 'NONE' }
AND comment = ''
AND compaction = { 'class' : 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold' : 32, 'min_threshold' : 4 }
AND compression = { 'chunk_length_in_kb' : 64, 'class' : 'org.apache.cassandra.io.compress.LZ4Compressor' }
AND default_time_to_live = 0
AND speculative_retry = '99PERCENTILE'
AND min_index_interval = 128
AND max_index_interval = 2048
AND crc_check_chance = 1.0;
When I run this query, I get:
Cassandra failure during read query at consistency LOCAL_ONE (1 responses were required but only 0 replica responded, 1 failed)
Here is my Cassandra query in Java:
select
count( * )
from
tbl_alerts_details
where
alert_state = 'ACKNOWLEDGE'
and date_part >= '2017-10-01'
and date_part <= '2017-10-31'
and is_real_time = 1
and alert_agent_type_name = 'VEHICLE' ALLOW FILTERING
This error made us realise we had a space issue: storing Base64-encoded images in every row had quickly brought on the tombstone problem.
As per this post,
Cassandra doesn't just scan through the rows, but also has to
accumulate them in memory while it prepares the response. This can
cause an out-of-memory error on the node if things go too far out, and
if multiple nodes are servicing the request, it may even cause a
multiple failure bringing down the whole cluster. To prevent this from
happening, the service aborts the query if it detects a dangerous
number of tombstones.

Failed to convert value of type [java.lang.String] to required type [java.lang.Long]

I am using Java 8 with Spring 4.3.1.RELEASE, Hibernate 5.2.1.Final and MySQL running on a Wildfly 10 Server.
I have some Java code that was working when I ran my server on a Windows machine. But since I moved the exact same code to a macOS (Sierra) machine, I get the following error:
org.springframework.web.method.annotation.MethodArgumentTypeMismatchException:
Failed to convert value of type [java.lang.String] to required type
[java.lang.Long]; nested exception is java.lang.NumberFormatException:
For input string: "null"
To me it looks like there is an error as a result of a String being assigned to a Long. As you can see in the code below, the avatar column is a byte[] with a @Lob annotation.
More detail:
19:07:23,287 WARN [org.springframework.web.servlet.mvc.support.DefaultHandlerExceptionResolver] (default task-3) Failed to bind request element: org.springframework.web.method.annotation.MethodArgumentTypeMismatchException: Failed to convert value of type [java.lang.String] to required type [java.lang.Long]; nested exception is java.lang.NumberFormatException: For input string: "null"
19:07:23,423 WARN [org.springframework.web.servlet.mvc.support.DefaultHandlerExceptionResolver] (default task-5) Failed to read HTTP message: org.springframework.http.converter.HttpMessageNotReadableException: Could not read document: Failed to decode VALUE_STRING as base64 (MIME-NO-LINEFEEDS): Illegal character ':' (code 0x3a) in base64 content
at [Source: java.io.PushbackInputStream@57905ec8; line: 19, column: 20]
at [Source: java.io.PushbackInputStream@57905ec8; line: 19, column: 13] (through reference chain: com.jobs.spring.domain.Person["avatar"]); nested exception is com.fasterxml.jackson.databind.JsonMappingException: Failed to decode VALUE_STRING as base64 (MIME-NO-LINEFEEDS): Illegal character ':' (code 0x3a) in base64 content
at [Source: java.io.PushbackInputStream@57905ec8; line: 19, column: 20]
at [Source: java.io.PushbackInputStream@57905ec8; line: 19, column: 13] (through reference chain: com.jobs.spring.domain.Person["avatar"])
19:07:29,140 INFO [org.hibernate.hql.internal.QueryTranslatorFactoryInitiator] (default task-6) HHH000397: Using ASTQueryTranslatorFactory
My code is as follows:
Person.java
import javax.persistence.*;
...
@Entity
@Table(name = "person")
@XmlRootElement(name = "person")
public class Person extends AbstractDomain<Long> {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
@Size(min = 1, max = 45)
@Column(name = "UID")
private String uid;
@Basic(fetch = FetchType.LAZY)
@Lob
@Column(name = "AVATAR", nullable = true)
private byte[] avatar;
@XmlElement
public byte[] getAvatar() {
return avatar;
}
public void setAvatar(byte[] avatar) {
this.avatar = avatar;
}
}
MySQL person table
Any help appreciated.
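Not an answer from the original thread, but the nested NumberFormatException: For input string: "null" usually means the client sent the literal string "null" where Spring must bind a numeric value, such as a path variable. A hypothetical handler to illustrate (names are assumptions, not from the post):
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class PersonController {

    // If the browser requests GET /person/null (e.g. because a JavaScript
    // variable was undefined), Spring calls Long.valueOf("null") while binding
    // the path variable and throws MethodArgumentTypeMismatchException
    // before this method is ever entered.
    @GetMapping("/person/{id}")
    public String getPerson(@PathVariable("id") Long id) {
        return "person " + id;
    }
}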

InvalidRequestException(why:Unknown identifier IdPiece)

When I tried to save an entity to Cassandra using the persist() method and the Kundera framework, I received this error:
28462 [Thread-15-localhostAMQPbolt0-executor[2 2]] INFO d.d.pieceDAOImpl - start to insert data
28513 [Thread-15-localhostAMQPbolt0-executor[2 2]] INFO c.i.c.c.CassandraClientBase - Returning cql query INSERT INTO "pieces"("width","height","depth","IdPiece") VALUES(10.0,11.0,12.0,'1') .
28543 [Thread-15-localhostAMQPbolt0-executor[2 2]] ERROR c.i.c.c.CassandraClientBase - Error while executing query INSERT INTO "pieces"("width","height","depth","IdPiece") VALUES(10.0,11.0,12.0,'1')
28544 [Thread-15-localhostAMQPbolt0-executor[2 2]] ERROR o.a.s.util - Async loop died!
java.lang.RuntimeException: com.impetus.kundera.KunderaException: com.impetus.kundera.KunderaException: InvalidRequestException(why:Unknown identifier IdPiece)
at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:448) ~[storm-core-1.0.0.jar:1.0.0]
at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:414) ~[storm-core-1.0.0.jar:1.0.0]
at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) ~[storm-core-1.0.0.jar:1.0.0]
at org.apache.storm.daemon.executor$fn__8226$fn__8239$fn__8292.invoke(executor.clj:851) ~[storm-core-1.0.0.jar:1.0.0]
at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.0.jar:1.0.0]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_99]
Caused by: com.impetus.kundera.KunderaException: com.impetus.kundera.KunderaException: InvalidRequestException(why:Unknown identifier IdPiece)
at com.impetus.kundera.persistence.EntityManagerImpl.persist(EntityManagerImpl.java:180) ~[project-0.0.1-SNAPSHOT-jar-with-dependencies.jar:?]
at database.dao.pieceDAOImpl.insert(pieceDAOImpl.java:54) ~[project-0.0.1-SNAPSHOT-jar-with-dependencies.jar:?]
at database.controller.DatabaseController.saveSensorEntitie(DatabaseController.java:47) ~[project-0.0.1-SNAPSHOT-jar-with-dependencies.jar:?]
at connector.bolt.PrinterBolt.execute(PrinterBolt.java:66) ~[project-0.0.1-SNAPSHOT-jar-with-dependencies.jar:?]
at org.apache.storm.daemon.executor$fn__8226$tuple_action_fn__8228.invoke(executor.clj:731) ~[storm-core-1.0.0.jar:1.0.0]
at org.apache.storm.daemon.executor$mk_task_receiver$fn__8147.invoke(executor.clj:463) ~[storm-core-1.0.0.jar:1.0.0]
at org.apache.storm.disruptor$clojure_handler$reify__7663.onEvent(disruptor.clj:40) ~[storm-core-1.0.0.jar:1.0.0]
at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:435) ~[storm-core-1.0.0.jar:1.0.0]
... 6 more
And I'm sure that idpiece is the primary key of my table.
My table:
CREATE TABLE mykeyspace.pieces (
idpiece text PRIMARY KEY,
depth double,
height double,
width double
) WITH bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';
My entity class:
@Entity
@Table(name = "pieces", schema = "mykeyspace@cassandra_pu")
public class PieceEntitie implements Serializable{
@Id
private String IdPiece;
@Column
private double width;
@Column
private double height;
@Column
private double depth;
How can I resolve this problem?
Thank you in advance.
In Cassandra, quoted identifiers are case-sensitive, so your column name "IdPiece" is different from the actual column idpiece in your table pieces.
Kundera uses quoted identifiers in the generated query:
INSERT INTO "pieces"("width","height","depth","IdPiece") VALUES(10.0,11.0,12.0,'1')
Because of the quoting, there is no column "IdPiece" in your table. The solution is to rename the field to idpiece in your entity class.
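A sketch of the corrected entity (only the @Id field name changes; getters and setters omitted):
@Entity
@Table(name = "pieces", schema = "mykeyspace@cassandra_pu")
public class PieceEntitie implements Serializable {
    @Id
    private String idpiece; // lowercase, matches the unquoted column name in the table
    @Column
    private double width;
    @Column
    private double height;
    @Column
    private double depth;
}
With this mapping the generated query becomes INSERT INTO "pieces"("width","height","depth","idpiece") ..., which matches the table definition.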

JPA, Mysql Blob returns data too long

I've got some byte[] fields in my entities, e.g.:
@Entity
public class ServicePicture implements Serializable {
private static final long serialVersionUID = 2877629751219730559L;
// seam-gen attributes (you should probably edit these)
@Id
@GeneratedValue
private Long id;
private String description;
@Lob
@Basic(fetch = FetchType.LAZY)
private byte[] picture;
In my database schema the field is set to BLOB, so this should be fine. Anyway: every time I try to insert a picture or PDF (nothing bigger than 1 MB), I only receive this:
16:52:27,327 WARN [JDBCExceptionReporter] SQL Error: 0, SQLState: 22001
16:52:27,327 ERROR [JDBCExceptionReporter] Data truncation: Data too long for column 'picture' at row 1
16:52:27,328 ERROR [STDERR] javax.persistence.PersistenceException: org.hibernate.exception.DataException: could not insert: [de.ac.dmg.productfinder.entity.ServicePicture]
16:52:27,328 ERROR [STDERR] at org.hibernate.ejb.AbstractEntityManagerImpl.throwPersistenceException(AbstractEntityManagerImpl.java:629)
16:52:27,328 ERROR [STDERR] at org.hibernate.ejb.AbstractEntityManagerImpl.persist(AbstractEntityManagerImpl.java:218)
16:52:27,328 ERROR [STDERR] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
16:52:27,328 ERROR [STDERR] at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
16:52:27,328 ERROR [STDERR] at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
16:52:27,328 ERROR [STDERR] at java.lang.reflect.Method.invoke(Unknown Source)
16:52:27,328 ERROR [STDERR] at org.jboss.seam.persistence.EntityManagerInvocationHandler.invoke(EntityManagerInvocationHandler.java:46)
16:52:27,328 ERROR [STDERR] at $Proxy142.persist(Unknown Source)
I've checked my MySQL cnf and max_allowed_packet is set to 16M. Am I missing something?
It all depends on the column type used for the picture column. Depending on your needs, use a:
TINYBLOB: maximum length of 255 bytes
BLOB: maximum length of 65,535 bytes
MEDIUMBLOB: maximum length of 16,777,215 bytes
LONGBLOB: maximum length of 4,294,967,295 bytes
Note that if you generate your table from the JPA annotations, you can "control" the type MySQL will use by specifying the length attribute of the Column, for example:
@Lob @Basic(fetch = FetchType.LAZY)
@Column(length=100000)
private byte[] picture;
Depending on the length, you'll get:
0 < length <= 255 --> `TINYBLOB`
255 < length <= 65535 --> `BLOB`
65535 < length <= 16777215 --> `MEDIUMBLOB`
16777215 < length <= 2³¹-1 --> `LONGBLOB`
I use the following and it works for images:
@Lob
@Column(name = "file", columnDefinition = "LONGBLOB")
private byte[] file;
In our case we had to use the following syntax:
public class CcpArchive
{
...
private byte[] ccpImage;
...
@Lob
@Column(nullable = false, name = "CCP_IMAGE", columnDefinition="BINARY(500000)")
public byte[] getCcpImage()
{
return ccpImage;
}
...
}
