I'm in charge of a task that should be simple, but as I've never done it before I'm having some trouble with it. I have successfully created an EJB 3 project using EclipseLink that calls a number of stored procedures on an Oracle database. I've configured the data source correctly, and I can connect and execute simple stored procedures and functions (without parameters, returning a cursor); however, I am currently unable to execute stored procedures with parameters.
I'm using the EclipseLink wiki as a reference: http://wiki.eclipse.org/Using_Basic_Query_API_(ELUG).
The code is:
StoredProcedureCall call = new StoredProcedureCall();
call.setProcedureName("p_environment.startSession");
List<String[]> args = new ArrayList<String[]>();
args.add(new String[] { "user", "ae01403" });
args.add(new String[] { "application", "app_code" });
args.add(new String[] { "locale", "it_IT" });
for (String[] pair : args) {
call.addNamedArgumentValue(pair[0], pair[1]);
}
DataReadQuery query = new DataReadQuery(call);
for (String[] pair : args) {
query.addArgument(pair[0]);
}
According to the database documentation (I have no access to the database itself), the procedure takes three VARCHAR IN parameters with the specified names. I then call executeQuery on the active session, but receive a "Wrong type or number of arguments" error. What am I doing wrong? Any help is appreciated.
EDIT: The stored procedure signature, as per documentation, is:
p_environment.startSession(as_user IN VARCHAR2,
as_application IN VARCHAR2,
as_locale IN VARCHAR2);
Many thanks!
In your code, call.addNamedArgumentValue(pair[0], pair[1]); looks strange.
Don't you need to do this instead?
call.addNamedArgument(procedureParameterName, argumentFieldName, argumentType);
It establishes the mapping between your stored procedure parameters and the arguments you want to use.
For example, with the parameter names from your documented signature, it gives:
call.addNamedArgument("as_user", "user", String.class);
call.addNamedArgument("as_application", "application", String.class);
call.addNamedArgument("as_locale", "locale", String.class);
You keep this:
DataReadQuery query = new DataReadQuery(call);
for (String[] pair : args) {
query.addArgument(pair[0]);
}
Then, to call your stored procedure:
Session session = jpaEntityManager.getActiveSession();
List<Object> args = new ArrayList<Object>();
args.add("ae01403");
args.add("app_code");
args.add("it_IT");
List<?> results = (List<?>) session.executeQuery(query, args);
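Putting all the pieces together, the full sequence would look like this (a sketch, untested, assuming the as_* parameter names match the package declaration exactly):
StoredProcedureCall call = new StoredProcedureCall();
call.setProcedureName("p_environment.startSession");
// map each procedure parameter (as_*) to a query argument name and its type
call.addNamedArgument("as_user", "user", String.class);
call.addNamedArgument("as_application", "application", String.class);
call.addNamedArgument("as_locale", "locale", String.class);
DataReadQuery query = new DataReadQuery(call);
// declare the query arguments in the order their values will be supplied
query.addArgument("user");
query.addArgument("application");
query.addArgument("locale");
Session session = jpaEntityManager.getActiveSession();
List<Object> values = new ArrayList<Object>();
values.add("ae01403");
values.add("app_code");
values.add("it_IT");
List<?> results = (List<?>) session.executeQuery(query, values);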
I'm trying to get data from an Oracle stored procedure. The problem is that in our database there is a function and a procedure with the same name and same parameters.
When I try to call it:
@Autowired
public void setDataSource(@Qualifier("dataSource") DataSource dataSource) {
JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
jdbcTemplate.setResultsMapCaseInsensitive(true);
this.functionGetSomeCode = new SimpleJdbcCall(jdbcTemplate)
.declareParameters(new SqlOutParameter("RETURN", OracleTypes.NUMBER))
.withFunctionName("get_some_code").withSchemaName("XXX").withCatalogName("some_pkg");
}
public Integer getSomeCode (String incoming) {
SqlParameterSource incomingParameters = new MapSqlParameterSource().addValue("incoming", incoming);
return functionGetSomeCode.executeFunction(Integer.class, incomingParameters);
}
I get an exception:
org.springframework.dao.InvalidDataAccessApiUsageException: Unable to determine the correct call signature - multiple procedures/functions/signatures
Is there a way to handle this situation without asking the DBA to rename the function / procedure to something different?
I've been able to call functions and procedures that have the same name, but it doesn't always work.
In your example it looks like you aren't declaring the input parameters. Try declaring the input and output parameters with types matched as closely as possible to the package declaration. If that still doesn't work, you can try turning off procedure column metadata access, but be sure to test.
Here is an example:
protected SimpleJdbcCall buildJdbcCall(JdbcTemplate jdbcTemplate) {
SimpleJdbcCall call = new SimpleJdbcCall(jdbcTemplate)
.withSchemaName(schema)
.withCatalogName(catalog)
.withFunctionName(functionName)
// can use withProcedureName(procedureName) for procedures
//.withReturnValue()
// .withoutProcedureColumnMetaDataAccess() // may need this
.declareParameters(buildSqlParameters());
return call;
}
public SqlParameter[] buildSqlParameters() {
return new SqlParameter[]{
new SqlParameter("p_id", Types.VARCHAR),
new SqlParameter("p_office_id", Types.VARCHAR),
new SqlOutParameter("l_clob", Types.CLOB)
};
}
In case you do not have a return variable, the code below can help. I was facing the same issue: my code was not working because I had not declared the out parameter (functions do not have out parameters as such). I resolved the issue by declaring the function's return value as an out parameter named RETURN. The code is as follows.
SimpleJdbcCall jdbcCall = new SimpleJdbcCall(jdbcTemplate.getDataSource()).withCatalogName("PACKAGE_NAME")
.withFunctionName("MY_FUNCTION_NAME").withoutProcedureColumnMetaDataAccess().declareParameters(
new SqlOutParameter("RETURN", Types.TIMESTAMP),
new SqlParameter("XYX_Y", Types.DATE),
new SqlParameter("HH_HDU", Types.VARCHAR)
);
jdbcCall.compile();
MapSqlParameterSource in = new MapSqlParameterSource();
in.addValue("XYX_Y", myDate);
in.addValue("HH_HDU", "PQR");
java.sql.Timestamp date = jdbcCall.executeFunction(java.sql.Timestamp.class, in);
I was also facing the same issue. I have two procedures with the same name, one of which has an extra parameter, "lang_code_in". The issue seems to be due to this, and it shows up in a Spring/Spring Boot application. So I added .withoutProcedureColumnMetaDataAccess():
final SimpleJdbcCall simpleJdbcCall = new SimpleJdbcCall(dataSource).withCatalogName(catalogName)
.withProcedureName(procedureName)
.withoutProcedureColumnMetaDataAccess()
.declareParameters(buildPullOrderSqlParameters());
But after this, it started giving the error below:
org.springframework.jdbc.BadSqlGrammarException: CallableStatementCallback; bad SQL grammar [{call NPIADM.CCM_JIGSAW_TEMPLATE_PKG.GET_UPC_XML_CONTENT(?, ?, ?)}]; nested exception is java.sql.SQLException: ORA-06550: line 1, column 7:
PLS-00306: wrong number or types of arguments in call to 'GET_UPC_XML_CONTENT'
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored
Then I added all the IN and OUT params to the declareParameters() call, as below, and it started working fine. In the code below, P_ORDER_NUMBER_I is an IN parameter and p_err_msg_o is an OUT parameter.
public SqlParameter[] buildPullOrderSqlParameters() {
return new SqlParameter[]{
new SqlParameter("P_ORDER_NUMBER_I", Types.VARCHAR),
new SqlOutParameter("p_err_msg_o", Types.VARCHAR)
};
}
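For completeness, invoking the procedure then looks roughly like this (a sketch; the order number is a placeholder):
MapSqlParameterSource in = new MapSqlParameterSource()
        .addValue("P_ORDER_NUMBER_I", "ORD-123"); // placeholder value
Map<String, Object> result = simpleJdbcCall.execute(in);
// OUT parameters come back in the result map, keyed by name
String errMsg = (String) result.get("p_err_msg_o");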
I'm trying to call a stored JavaScript function from the MongoDB Java driver.
I have been following this guide to store the function on the DB server and I'm able to call the function from the mongo shell and have the result returned.
However, I cannot figure out how to call the same function from Java.
According to the API documentation (http://api.mongodb.org/java/current/com/mongodb/DB.html#doEval-java.lang.String-java.lang.Object...-), there's a method called doEval.
I have tried to use it in this method:
public static String callFunction() {
try (MongoClient client = new MongoClient("localhost")) {
com.mongodb.DB db = client.getDB("TestDB");
return db.doEval("echoFunction", 3).toString();
}
}
But when I call the method this is what I get:
{ "retval" : { "$code" : "function (x) {\n return x;\n}"} , "ok" : 1.0}
and I would expect to get the number 3 back in this case.
Another problem with the above code is that the method client.getDB() is deprecated. As I understand it, the new method to call is client.getDatabase(), which returns a MongoDatabase object, but according to the API there is no method on it to execute a function.
So my question is: is it possible to execute a stored JavaScript function on the database server from Java and get back the result of that function? If it is possible, I would appreciate some help on how to do it.
Thank you.
Edit:
According to a comment on Calling server js function on mongodb from java:
"It seems like getNextSequence is a function written in the mongo
javascript shell. Neither the database (mongod) nor the Java side
knows this function exists and neither is able to interprete the
Javascript code the function contains. You will have to reimplement it
in Java. "
The function I'm trying to implement is a bit more complex than the example above - it's supposed to return a collection of documents, and that does not seem to be working using the db.doEval method.
So I guess the comment is correct?
You can do all of this with the Java driver.
MongoClient mongoClient = new MongoClient();
MongoDatabase mdb = mongoClient.getDatabase("TestDB");
/* run this <code snippet> in bootstrap */
BsonDocument echoFunction = new BsonDocument("value",
new BsonJavaScript("function(x1) { return x1; }"));
BsonDocument myAddFunction = new BsonDocument("value",
new BsonJavaScript("function (x, y){ return x + y; }"));
mdb.getCollection("system.js").updateOne(
new Document("_id", "echoFunction"),
new Document("$set", echoFunction),
new UpdateOptions().upsert(true));
mdb.getCollection("system.js").updateOne(
new Document("_id", "myAddFunction"),
new Document("$set", myAddFunction),
new UpdateOptions().upsert(true));
mdb.runCommand(new Document("$eval", "db.loadServerScripts()"));
/* end </code snippet> */
Document doc1 = mdb.runCommand(new Document("$eval", "echoFunction(5)"));
System.out.println(doc1);
The result is also:
Document{{retval=5.0, ok=1.0}}
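If you want to pass arguments without concatenating them into the string, the eval command also accepts an args array. Note that server-side eval has been deprecated since MongoDB 3.0 and was removed in 4.2, so all of this only works against older servers:
// arguments are passed via the command's "args" field
Document cmd = new Document("$eval", "function(x, y) { return myAddFunction(x, y); }")
        .append("args", Arrays.asList(5, 8));
Document doc2 = mdb.runCommand(cmd);
System.out.println(doc2); // Document{{retval=13.0, ok=1.0}}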
You should do this instead:
return db.doEval("echoFunction(3)").toString();
If you use just the function name in eval, you only refer to the server-side JavaScript variable that stores the function's code; it doesn't execute it. When you use parentheses, you request actual execution of the function. If you need to send something more complex than a number, I would advise using a JSON serializer.
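Also note that doEval takes varargs that are forwarded to the evaluated function, so you can avoid building the string by hand (a sketch against the legacy DB API used in the question):
// the trailing arguments are passed to the evaluated function as parameters
CommandResult result = db.doEval("function(x) { return echoFunction(x); }", 3);
Object retval = result.get("retval"); // 3.0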
I resolved the same issue in the following way:
I ran commands in the mongo shell to create my stored JavaScript functions:
db.system.js.save(
{
_id: "echoFunction" ,
value : function(x1) { return x1; }
}
)
db.system.js.save(
{
_id : "myAddFunction" ,
value : function (x, y){ return x + y; }
}
);
db.system.js.save(
{
_id: "fullFillCollumns" ,
value : function () {
for (i = 0; i < 2000; i++) {
db.numbers.save({num:i}); } }
}
);
To execute these functions from the MongoDB Java driver:
MongoClient mongoClient = new MongoClient();
MongoDatabase db = mongoClient.getDatabase("databaseName");
db.runCommand(new Document("$eval", "fullFillCollumns()"));
Document doc1 = db.runCommand(new Document("$eval", "echoFunction(5)"));
System.out.println(doc1);
Document doc2 = db.runCommand(new Document("$eval", "myAddFunction(5,8)"));
System.out.println(doc2);
I can see that the numbers collection was created and filled with values. In the IntelliJ IDEA console I see:
Document{{retval=5.0, ok=1.0}}
Document{{retval=13.0, ok=1.0}}
I would like to update a specific collection in MongoDb via Spark in Java.
I am using the MongoDB Connector for Hadoop to retrieve and save information from Apache Spark to MongoDb in Java.
After following Sampo Niskanen's excellent post regarding retrieving and saving collections to MongoDb via Spark, I got stuck with updating collections.
MongoOutputFormat.java includes a constructor taking String[] updateKeys, which I am guessing refers to a possible list of keys for comparing against existing collections in order to perform an update. However, when using Spark's saveAsNewAPIHadoopFile() method with the parameter MongoOutputFormat.class, I am wondering how to use that update constructor.
save.saveAsNewAPIHadoopFile("file:///bogus", Object.class, Object.class, MongoOutputFormat.class, config);
Prior to this, MongoUpdateWritable.java was being used to perform collection updates. From examples I've seen on Hadoop, this is normally set on mongo.job.output.value, maybe like this in Spark:
save.saveAsNewAPIHadoopFile("file:///bogus", Object.class, MongoUpdateWritable.class, MongoOutputFormat.class, config);
However, I'm still wondering how to specify the update keys in MongoUpdateWritable.java.
Admittedly, as a hack, I've set the "_id" of the object to my document's KeyValue so that, when a save is performed, the collection overwrites documents having the same KeyValue as their _id.
JavaPairRDD<BSONObject,?> analyticsResult; //JavaPairRdd of (mongoObject,result)
JavaPairRDD<Object, BSONObject> save = analyticsResult.mapToPair(s -> {
BSONObject o = (BSONObject) s._1;
//for all keys, set _id to key:value_
String id = "";
for (String key : o.keySet()){
id += key + ":" + (String) o.get(key) + "_";
}
o.put("_id", id);
o.put("result", s._2);
return new Tuple2<>(null, o);
});
save.saveAsNewAPIHadoopFile("file:///bogus", Object.class, Object.class, MongoOutputFormat.class, config);
I would like to perform the MongoDB collection update via Spark using MongoOutputFormat, MongoUpdateWritable, or Configuration, ideally using the saveAsNewAPIHadoopFile() method. Is it possible? If not, is there any other way that does not involve specifically setting the _id to the key values I want to update on?
I tried several combinations of config.set("mongo.job.output.value","....") and several combinations of
.saveAsNewAPIHadoopFile(
"file:///bogus",
classOf[Any],
classOf[Any],
classOf[com.mongodb.hadoop.MongoOutputFormat[Any, Any]],
mongo_config
)
None of them worked. I got it working by using the MongoUpdateWritable class as the output of my map method:
items.map(row => {
val mongo_id = new ObjectId(row("id").toString)
val query = new BasicBSONObject()
query.append("_id", mongo_id)
val update = new BasicBSONObject()
update.append("$set", new BasicBSONObject().append("field_name", row("new_value")))
val muw = new MongoUpdateWritable(query,update,false,true)
(null, muw)
})
.saveAsNewAPIHadoopFile(
"file:///bogus",
classOf[Any],
classOf[Any],
classOf[com.mongodb.hadoop.MongoOutputFormat[Any, Any]],
mongo_config
)
The raw query executed in mongo is then something like this:
2014-11-09T13:32:11.609-0800 [conn438] update db.users query: { _id: ObjectId('5436edd3e4b051de6a505af9') } update: { $set: { value: 10 } } nMatched:1 nModified:0 keyUpdates:0 numYields:0 locks(micros) w:24 3ms
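For reference, a rough Java equivalent of the same map step (an untested sketch; the row type and the field names are placeholders for your own data):
JavaPairRDD<Object, MongoUpdateWritable> updates = items.mapToPair(row -> {
    BasicBSONObject query = new BasicBSONObject("_id", new ObjectId(row.get("id").toString()));
    BasicBSONObject update = new BasicBSONObject("$set",
            new BasicBSONObject("field_name", row.get("new_value")));
    // upsert = false, multiUpdate = true, matching the Scala version above
    return new Tuple2<>(null, new MongoUpdateWritable(query, update, false, true));
});
updates.saveAsNewAPIHadoopFile("file:///bogus", Object.class, MongoUpdateWritable.class,
        MongoOutputFormat.class, config);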
I'm trying to use a similar example from the sample code found here.
My sample function is:
void query()
{
String nodeResult = "";
String rows = "";
String resultString;
String columnsString;
System.out.println("In query");
// START SNIPPET: execute
ExecutionEngine engine = new ExecutionEngine( graphDb );
ExecutionResult result;
try ( Transaction ignored = graphDb.beginTx() )
{
result = engine.execute( "start n=node(*) where n.Name =~ '.*79.*' return n, n.Name" );
// END SNIPPET: execute
// START SNIPPET: items
Iterator<Node> n_column = result.columnAs( "n" );
for ( Node node : IteratorUtil.asIterable( n_column ) )
{
// note: we're grabbing the name property from the node,
// not from the n.name in this case.
nodeResult = node + ": " + node.getProperty( "Name" );
System.out.println("In for loop");
System.out.println(nodeResult);
}
// END SNIPPET: items
// START SNIPPET: columns
List<String> columns = result.columns();
// END SNIPPET: columns
// the result is now empty, get a new one
result = engine.execute( "start n=node(*) where n.Name =~ '.*79.*' return n, n.Name" );
// START SNIPPET: rows
for ( Map<String, Object> row : result )
{
for ( Entry<String, Object> column : row.entrySet() )
{
rows += column.getKey() + ": " + column.getValue() + "; ";
System.out.println("nested");
}
rows += "\n";
}
// END SNIPPET: rows
resultString = engine.execute( "start n=node(*) where n.Name =~ '.*79.*' return n.Name" ).dumpToString();
columnsString = columns.toString();
System.out.println(rows);
System.out.println(resultString);
System.out.println(columnsString);
System.out.println("leaving");
}
}
When I run this query in the web console I get many results (as there are multiple nodes whose Name attribute contains the pattern 79), yet running this code returns no results. The debug print statements "In for loop" and "nested" never print either. This must mean no results are found in the iterator, yet that doesn't make sense.
And yes, I already checked and made sure that the graphDb variable points to the same path the web console uses. I have other code, run earlier, that uses the same variable to write to the database.
EDIT - More info
If I place the contents of query() in the same function that creates my data, I get the correct results. If I run the query by itself, it returns nothing. It's almost as if the query works only in the instance where I add the data, and not if I come back to the database cold in a separate instance.
EDIT2 -
Here is a snippet of code that shows the bigger context of how it is being called and sharing the same DBHandle
package ContextEngine;
import ContextEngine.NeoHandle;
import java.util.LinkedList;
/*
* Class to handle streaming data from any coded source
*/
public class Streamer {
private NeoHandle myHandle;
private String contextType;
Streamer()
{
}
public void openStream(String contextType)
{
myHandle = new NeoHandle();
myHandle.createDb();
}
public void streamInput(String dataLine)
{
Context context = new Context();
/*
* get database instance
* write to database
* check for errors
* report errors & success
*/
System.out.println(dataLine);
//apply rules to data (make ContextRules do this, send type and string of data)
ContextRules contextRules = new ContextRules();
context = contextRules.processContextRules("Calls", dataLine);
//write data (using linked list from contextRules)
NeoProcessor processor = new NeoProcessor(myHandle);
processor.processContextData(context);
}
public void runQuery()
{
NeoProcessor processor = new NeoProcessor(myHandle);
processor.query();
}
public void closeStream()
{
/*
* close database instance
*/
myHandle.shutDown();
}
}
Now, if I call streamInput AND runQuery in the same instance (the parent calls both), the query returns results. If I only call runQuery and do not enter ANY data in that instance (yet the web console shows data for the same query), I get nothing. Why would I have to create the nodes and enter them into the database at runtime just to get a valid query result? Shouldn't I ALWAYS get the same results with such a query?
You mention that you are using the Neo4j Browser, which comes with Neo4j. However, the example you posted is for Neo4j Embedded, which is the in-process version of Neo4j. Are you sure you are talking to the same database when you try your query in the Browser?
In order to talk to Neo4j Server from Java, I'd recommend looking at the Neo4j JDBC driver, which has good support for connecting to the Neo4j server from Java.
http://www.neo4j.org/develop/tools/jdbc
You can set up a simple connection by adding the Neo4j JDBC jar to your classpath, available here: https://github.com/neo4j-contrib/neo4j-jdbc/releases Then just use Neo4j as any JDBC driver:
Connection conn = DriverManager.getConnection("jdbc:neo4j://localhost:7474/");
// the Neo4j JDBC driver uses numbered placeholders in the Cypher string
PreparedStatement ps = conn.prepareStatement("START n=node({1}) RETURN id(n) AS id");
ps.setLong(1, id);
ResultSet rs = ps.executeQuery();
while (rs.next()) {
    System.out.println(rs.getLong("id"));
}
Refer to the JDBC documentation for more advanced usage.
To answer your question on why the data is not durably stored, it may be one of many reasons. I would attempt to incrementally scale back the complexity of the code to try and locate the culprit. For instance, until you've found your problem, do these one at a time:
Instead of looping through the result, print it using System.out.println(result.dumpToString());
Instead of the regex query, try just MATCH (n) RETURN n, to return all data in the database (see the sketch after this list)
Make sure the data you are seeing in the browser is not "old" data inserted earlier on, but really is an insert from your latest run of the Java program. You can verify this by deleting the data via the browser before running the Java program using MATCH (n) OPTIONAL MATCH (n)-[r]->() DELETE n,r;
Make sure you are actually working against the same database directory. You can verify this by leaving the server running: if your Java program still starts, then (unless it is using the Neo4j REST bindings) it is not using the same directory, because two Neo4j instances cannot run against the same database directory simultaneously.
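For the second item, a minimal sanity check might look like this (assuming the same embedded graphDb and ExecutionEngine API as in the question):
ExecutionEngine engine = new ExecutionEngine(graphDb);
try (Transaction tx = graphDb.beginTx()) {
    // dump the whole store; if this prints nothing, the data was never persisted
    System.out.println(engine.execute("MATCH (n) RETURN n").dumpToString());
    tx.success();
}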
I'm using the neo4j 1.9.M01 version with java-rest-binding 1.8.M07, and I have a problem with this code, which aims to get a node with the property "URL" equal to "ARREL" from a neo4j database, using the query language via REST. The problem seems to happen only inside a transaction, where it throws an exception; otherwise it works well:
RestGraphDatabase graphDb = new RestGraphDatabase("http://localhost:7474/db/data");
RestCypherQueryEngine queryEngine = new RestCypherQueryEngine(graphDb.getRestAPI());
Node nodearrel = null;
Transaction tx0 = graphDb.beginTx();
try{
final String queryStringarrel = ("START n=node(*) WHERE n.URL =~{URL} RETURN n");
QueryResult<Map<String, Object>> retornar = queryEngine.query(queryStringarrel, MapUtil.map("URL","ARREL"));
for (Map<String,Object> row : retornar)
{
nodearrel = (Node)row.get("n");
System.out.println("Arrel: "+nodearrel.getProperty("URL")+" id : "+nodearrel.getId());
}
tx0.success();
}
(...)
But an exception happens on every execution at the line that returns the QueryResult object: exception tx0: Error reading as JSON ''
I have also tried to do it with the ExecutionEngine (inside a transaction):
ExecutionEngine engine = new ExecutionEngine( graphDb );
String ARREL = "ARREL";
ExecutionResult result = engine.execute("START n=node(*) WHERE n.URL =~{"+ARREL+"} RETURN n");
Iterator<Node> n_column = result.columnAs("n");
Node arrelat = (Node) n_column.next();
for ( Node node : IteratorUtil.asIterable( n_column ) )
(...)
But it also fails at n_column.next(), returning a null object that throws an exception.
The problem is that I need to use transactions to optimize the queries; otherwise it takes too much time to process all the queries I need to run. Should I try to combine several operations into one query, to avoid using transactions?
Try adding single quotes:
START n=node(*) WHERE n.URL =~ '{URL}' RETURN n
Can you update your java-rest-binding to the latest version (1.8)? In between we had a version that automatically applied REST batch operations to places with transaction semantics.
So the transactions you see are not real transactions; they just record your operations to be executed as batch REST operations on tx.success/finish.
Execute the queries within the transaction, but only access the results after the tx is finished. Then your results will be there.
This is useful, for instance, to send many Cypher queries to the server in one go and have all the results available afterwards.
And yes, @ulkas, use parameters, but not like that:
START n=node(*) WHERE n.URL =~ {URL} RETURN n
params: { "URL" : "http://your.url" }
No quotes necessary when using params, just like SQL prepared statements.
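Putting both points together, a sketch with java-rest-binding: run the parameterized query inside the transaction, but iterate the result only after the tx has finished:
RestCypherQueryEngine queryEngine = new RestCypherQueryEngine(graphDb.getRestAPI());
QueryResult<Map<String, Object>> result;
Transaction tx = graphDb.beginTx();
try {
    // recorded here, executed as a REST batch on success/finish
    result = queryEngine.query("START n=node(*) WHERE n.URL =~ {URL} RETURN n",
            MapUtil.map("URL", "ARREL"));
    tx.success();
} finally {
    tx.finish();
}
// the batched results are available once the tx has finished
for (Map<String, Object> row : result) {
    System.out.println(row.get("n"));
}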