Mapping Java long to Oracle NUMBER(15)

I have a DB column whose datatype is NUMBER(15), and the corresponding field in my Java classes is long. The question is how I would map it using java.sql.Types.
Would Types.BIGINT work?
Or shall I use something else?
P.S.:
I can't afford to change the datatype in either the Java class or the DB.

From this link it says that java.sql.Types.BIGINT should be used to map a Java long to a NUMBER in SQL (Oracle).
Attaching a screenshot of the table in case the link ever dies.
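For instance, a minimal JDBC sketch of binding a long against such a column; the connection details and the "orders"/"order_no" names are made up for illustration:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Types;

public class BigintMappingDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection and table; "order_no" stands in for the NUMBER(15) column.
        try (Connection con = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//localhost:1521/XE", "scott", "tiger");
             PreparedStatement ps = con.prepareStatement(
                 "UPDATE orders SET order_no = ? WHERE id = ?")) {
            long orderNo = 123456789012345L;        // 15 digits, fits NUMBER(15)
            ps.setObject(1, orderNo, Types.BIGINT); // bind explicitly as BIGINT
            ps.setLong(2, 42L);                     // plain setLong implies BIGINT too
            ps.executeUpdate();
        }
    }
}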

A good place to find reliable size mappings between Java and Oracle types is the Hibernate ORM tool. As documented in the code here, Hibernate uses an Oracle NUMBER(19,0) to represent a java.sql.Types.BIGINT, which should map to a long primitive.

I always use wrapper types, because wrapper types can express null values.
In this case I would use the Long wrapper type.
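For example, with JDBC a wrapper lets you distinguish a NULL column from a real 0 (a small hypothetical read):

long v = rs.getLong("order_no");        // rs is a java.sql.ResultSet; NULL reads as 0
Long orderNo = rs.wasNull() ? null : v; // the wrapper can carry the NULL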

I had a similar problem where I couldn't modify the Java type or the database type. In my situation I needed to execute a native SQL query (to be able to utilize Oracle's recursive query abilities) and map the result set to a non-managed entity (essentially a simple pojo class).
I found a combination of addScalar and setResultTransformer worked wonders.
hibernateSes.createSQLQuery("SELECT \n"
+ " c.notify_state_id as \"notifyStateId\", \n"
+ " c.parent_id as \"parentId\",\n"
+ " c.source_table as \"sourceTbl\", \n"
+ " c.source_id as \"sourceId\", \n"
+ " c.msg_type as \"msgType\", \n"
+ " c.last_updt_dtm as \"lastUpdatedDateAndTime\"\n"
+ " FROM my_state c\n"
+ "LEFT JOIN my_state p ON p.notify_state_id = c.parent_id\n"
+ "START WITH c.notify_state_id = :stateId\n"
+ "CONNECT BY PRIOR c.notify_state_id = c.parent_id")
.addScalar("notifyStateId", Hibernate.LONG)
.addScalar("parentId", Hibernate.LONG)
.addScalar("sourceTbl",Hibernate.STRING)
.addScalar("sourceId",Hibernate.STRING)
.addScalar("msgType",Hibernate.STRING)
.addScalar("lastUpdatedDateAndTime", Hibernate.DATE)
.setParameter("stateId", notifyStateId)
.setResultTransformer(Transformers.aliasToBean(MyState.class))
.list();
Where notifyStateId, parentId, sourceTbl, sourceId, msgType, and lastUpdatedDateAndTime are all properties of MyState.
Without the addScalar calls, I would get a java.lang.IllegalArgumentException: argument type mismatch, because Hibernate was turning Oracle's NUMBER type into a BigDecimal, but notifyStateId and parentId are Long types on MyState.
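For reference, a minimal sketch of what MyState could look like; the class is not shown in the original, so the property names below are simply taken from the aliases, and aliasToBean only needs a no-arg constructor plus matching setters:

import java.util.Date;

public class MyState {
    private Long notifyStateId;   // Long, not BigDecimal, hence the addScalar calls
    private Long parentId;
    private String sourceTbl;
    private String sourceId;
    private String msgType;
    private Date lastUpdatedDateAndTime;

    public void setNotifyStateId(Long notifyStateId) { this.notifyStateId = notifyStateId; }
    public void setParentId(Long parentId) { this.parentId = parentId; }
    public void setSourceTbl(String sourceTbl) { this.sourceTbl = sourceTbl; }
    public void setSourceId(String sourceId) { this.sourceId = sourceId; }
    public void setMsgType(String msgType) { this.msgType = msgType; }
    public void setLastUpdatedDateAndTime(Date d) { this.lastUpdatedDateAndTime = d; }
    // getters omitted for brevity
}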

ObjectDB - passing an Object as a parameter

I have been using JSF, JPA, and MySQL with EclipseLink for 5 years. I want to shift to ObjectDB as it is very fast, especially with very large datasets. During migration, I ran into this issue.
In JPA with EclipseLink, I passed objects as parameters. But in ObjectDB, I need to pass the id of objects to get the results. I have to change this in several places. Can anyone help me overcome this issue?
This code worked fine with EclipseLink and MySQL. Here I pass the object "salesRep" as the parameter.
String j = "select b from "
+ " Bill b "
+ " where b.billCategory=:cat "
+ " and b.billType=:type "
+ " and b.salesRep=:rep ";
Map<String, Object> m = new HashMap<>();
m.put("cat", BillCategory.Loading);
m.put("type", BillType.Billed_Bill);
m.put("rep", getWebUserController().getLoggedUser());
I had to change it like this to make it work in ObjectDB. Here I have to pass the id (type long) of the object "salesRep" as the parameter.
String j = "select b from "
+ " Bill b "
+ " where b.billCategory=:cat "
+ " and b.billType=:type "
+ " and b.salesRep.id=:rep ";
Map<String, Object> m = new HashMap<>();
m.put("cat", BillCategory.Loading);
m.put("type", BillType.Billed_Bill);
m.put("rep", getWebUserController().getLoggedUser().getId());
There is a difference between EclipseLink and ObjectDB in handling detached entity objects. The default behaviour of ObjectDB is to follow the JPA specification and stop loading referenced objects by field access (transparent navigation) once an object becomes detached. EclipseLink does not treat detached objects this way.
This could make a difference in situations such as in a JSF application, where an object becomes detached before loading all necessary referenced data.
One solution (the JPA portable way) is to make sure that all the required data is loaded before objects become detached.
Another possible solution is to enable loading referenced objects by access (transparent navigation) for detached objects, by setting the objectdb.temp.no-detach system property. See #3 in this forum thread.
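As an illustration of the portable approach, the referenced objects can be fetched eagerly in the query itself, e.g. with JOIN FETCH; the entity and property names below mirror the question, everything else is an assumption:

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.TypedQuery;

public class BillLoader {
    // Loads each Bill together with its salesRep before the entities
    // become detached, so no lazy navigation is needed in the JSF view.
    public List<Bill> loadBills(EntityManager em, long repId) {
        TypedQuery<Bill> q = em.createQuery(
            "select b from Bill b join fetch b.salesRep "
          + "where b.billCategory = :cat and b.billType = :type "
          + "and b.salesRep.id = :rep", Bill.class);
        q.setParameter("cat", BillCategory.Loading);
        q.setParameter("type", BillType.Billed_Bill);
        q.setParameter("rep", repId);
        return q.getResultList();
    }
}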

Converting Postgres Query Plan into xml and store in file using Eclipse JDBC

As per the title, I'm currently using JDBC in Eclipse to connect to my PostgreSQL database.
I have been running EXPLAIN ANALYZE statements to retrieve query plans from Postgres itself. However, is it possible to store these query plans in a structure that resembles a tree? e.g. a main branch and sub-branches. I read somewhere that it is a good idea to store it in an XML document first and manipulate it from there.
Is there an API in Java for me to achieve this? Thanks!
Try using FORMAT XML, e.g.:
t=# explain (analyze, format xml) select * from pg_database join pg_class on true;
QUERY PLAN
----------------------------------------------------------------
<explain xmlns="http://www.postgresql.org/2009/explain"> +
<Query> +
<Plan> +
<Node-Type>Nested Loop</Node-Type> +
<Join-Type>Inner</Join-Type> +
<Startup-Cost>0.00</Startup-Cost> +
<Total-Cost>23.66</Total-Cost> +
<Plan-Rows>722</Plan-Rows> +
<Plan-Width>457</Plan-Width> +
<Actual-Startup-Time>0.026</Actual-Startup-Time> +
<Actual-Total-Time>3.275</Actual-Total-Time> +
<Actual-Rows>5236</Actual-Rows> +
<Actual-Loops>1</Actual-Loops> +
<Plans> +
<Plan> +
<Node-Type>Seq Scan</Node-Type> +
<Parent-Relationship>Outer</Parent-Relationship> +
...and so on
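On the Java side the plan can be fetched over JDBC like any other result set, written to a file, and then parsed with a standard XML parser to walk the <Plan>/<Plans> hierarchy as a tree. A minimal sketch (connection details are placeholders):

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ExplainToXml {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                 "jdbc:postgresql://localhost:5432/mydb", "user", "pass");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                 "EXPLAIN (ANALYZE, FORMAT XML) "
               + "select * from pg_database join pg_class on true")) {
            StringBuilder xml = new StringBuilder();
            while (rs.next()) {              // the plan arrives as text in the result set
                xml.append(rs.getString(1)).append('\n');
            }
            Files.write(Paths.get("plan.xml"),
                        xml.toString().getBytes(StandardCharsets.UTF_8));
            // plan.xml can now be loaded with javax.xml.parsers.DocumentBuilder.
        }
    }
}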

Lucene 6 - recommended way to store numeric fields with term vocabulary

In Lucene 6, LongField and IntField have been renamed to LegacyLongField and LegacyIntField, deprecated with a JavaDoc suggestion to use LongPoint and IntPoint classes instead.
However, it seems impossible to build a term vocabulary (= enumerate all distinct values) for these XPoint fields. A Lucene mailing list entry confirms it:
PointFields are different than conventional inverted fields, so they also don't show up in fields(). You cannot get a term dictionary from them.
As a third option, one can add a field of class NumericDocValuesField which, as far as I know, also doesn't provide a way of building a term vocabulary.
Is there a non-deprecated way of indexing a numeric field in Lucene 6, given the requirement to build a term vocabulary?
In my case I just duplicated the field, once as a LongPoint and once as a stored non-indexed field, both with the same name.
The code is roughly:
doc.add(new NumericDocValuesField("ts", timestamp.toEpochMilli()));
doc.add(new LongPoint("ts", timestamp.toEpochMilli()));
doc.add(new StoredField("ts", timestamp.toEpochMilli()));
It is a bit ugly, but think of it as adding an index to the stored field.
These field types can use the same name without interfering.
The DocValues field serves document-age-based scoring, and the LongPoint serves range queries.
I had the same issue and finally found a solution for my use case - I'm indexing, not storing, a LongPoint:
doc.add(new LongPoint("time",timeMsec));
My first idea was to create the query like this:
Query query = parser.parse("time:[10003 TO 10003]");
System.err.println( "Searching for: " + query + " (" + query.getClass() + ")" );
But this will not return ANY document, at least not with the StandardAnalyzer and the default QueryParser :-(
The printout is: "Searching for: time:[10003 TO 10003] (class org.apache.lucene.search.TermRangeQuery)"
What works, however, is creating the query with LongPoint.newRangeQuery():
Query query = LongPoint.newRangeQuery("time", 10003, 10003);
System.err.println( "Searching for: " + query + " (" + query.getClass() + ")" );
This prints: "Searching for: time:[10003 TO 10003] (class org.apache.lucene.document.LongPoint$1)". So the standard QueryParser creates a TermRangeQuery instead of a LongPoint range query. I'm new to Lucene so I don't understand the details here, but it would be nice for the QueryParser to support LongPoint seamlessly...
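As a side note, for the single-value range above, LongPoint also has an exact-match factory that should be equivalent:

Query query = LongPoint.newExactQuery("time", 10003L);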

Which kind of objects are supported by Siddhi for the "object" attribute type?

I am performing some experimentations for a prototype using Siddhi as a CEP engine, and would like to know if the input streams only support flat event data or can also support a JSON-like data hierarchy for queries.
Siddhi's documentation refers to an object type for attributes, but I could not find anywhere what this type refers to.
In the code samples provided in the source repository, this attribute type is also never used.
Extending one of the queries written in these examples, I would like to be able to do something like:
String executionPlan = ""
+ "define stream cseEventStream (symbol string, price float, volume long, data object); "
+ " "
+ "#info(name = 'query1') "
+ "from cseEventStream[volume < 150 and data.myKey == 'myValue'] "
+ "select symbol,price "
+ "insert into outputStream ;";
Is any kind of JSON-like data supported by Siddhi ? If yes, what Java object types should be passed to the InputHandler ?
It accepts java.lang.Object instances, so you can pass any Java object there. But those objects are pass-through only (the Siddhi engine just passes them along with the event), and you won't be able to do any modification/processing of those objects unless you write a custom extension.
If you want to process JSON inputs, use the WSO2 CEP product. You will be able to define mappings and disassemble the JSON input into primitive types like string, int, and float that the Siddhi engine can process.
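For completeness, a minimal sketch of feeding such an event in the Siddhi 3.x API, where the object attribute is just passed through (the Map payload is an arbitrary example):

import java.util.HashMap;
import java.util.Map;
import org.wso2.siddhi.core.ExecutionPlanRuntime;
import org.wso2.siddhi.core.SiddhiManager;
import org.wso2.siddhi.core.stream.input.InputHandler;

public class ObjectAttributeDemo {
    public static void main(String[] args) throws Exception {
        SiddhiManager siddhiManager = new SiddhiManager();
        String executionPlan = "define stream cseEventStream "
            + "(symbol string, price float, volume long, data object); "
            + "@info(name = 'query1') "
            + "from cseEventStream[volume < 150] "
            + "select symbol, price insert into outputStream;";
        ExecutionPlanRuntime runtime = siddhiManager.createExecutionPlanRuntime(executionPlan);
        runtime.start();
        InputHandler handler = runtime.getInputHandler("cseEventStream");
        Map<String, Object> data = new HashMap<>(); // any java.lang.Object is accepted
        data.put("myKey", "myValue");
        handler.send(new Object[]{"IBM", 75.6f, 100L, data}); // "data" rides along untouched
        runtime.shutdown();
    }
}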
In the new Siddhi 4.x, everything is an extension to Siddhi, and it has a set of mapper extensions which can be used even if you are using Siddhi as a library. With a source extension and a mapper extension you will not have to write your own code to receive and map data.
The latest WSO2 analytics offering, WSO2 SP, is based on Siddhi 4.x.

MySQL enum datatype access in clojure

I am trying to write a simple application which reads a database and produces a set of functions with which to access it; so far so good. Now, what I have come across is that some of the columns in my database are defined as MySQL enum types (e.g. ENUM('red','green','violet')), and I would like to validate the stuff I send to the database rather than receive an error from the driver when an unacceptable value is given. So I was wondering if there is a way to retrieve the possible values for the enum from within Clojure.
I am using [clojure.java.jdbc "0.3.0-alpha5"] and [mysql/mysql-connector-java "5.1.25"]. In order to get the metadata for the table I am currently using java.sql.DatabaseMetaData, but trying .getPseudoColumns just gives me nil every time.
It turns out there is no straightforward way to do this using libraries. My own solution is:
(defn- parse-enum
  "Parses an enum string and returns its components"
  [enum-str]
  ;; e.g. "enum('temp','active','canceled','deleted')"
  (map (comp keyword #(.replace % "'" ""))
       (-> enum-str
           (.replaceFirst "^[^\\(]+\\(([^\\)]+)\\)$" "$1")
           (.split "'?,'?"))))

(defn get-enum-value
  "Returns the values for an enum in a table.column"
  [table column]
  (jdbc/with-connection db
    (jdbc/with-query-results rs
      [(str "show columns from " table " where field = ?") column]
      ((comp set parse-enum :type first) rs))))
