Disable SQL escape in Apache Ignite - java

I've tried
CacheConfiguration<?, ?> cacheCfg = new CacheConfiguration<>(cacheTemplateName).setSqlSchema("PUBLIC"); // CREATE TABLE can only be executed on the PUBLIC schema
cacheCfg.setSqlEscapeAll(false); // otherwise Ignite quotes identifiers we have already quoted ourselves, and there are cases where we must quote before Ignite gets the query
cacheCfg.setCacheMode(CacheMode.PARTITIONED);
ignite.addCacheConfiguration(cacheCfg); // required to register cacheTemplateName as a template; see the WITH section of https://apacheignite-sql.readme.io/docs/create-table
Unfortunately, nothing I try seems to work. I've stepped through in the debugger, and isSqlEscapeAll() always returns true.
FYI, in the CREATE TABLE statement I've set TEMPLATE=MyTPLName.
Is it possible to disable this behaviour? My queries are already appropriately quoted.
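For reference, the statement is issued roughly like this (the table definition is illustrative, and cache stands for any IgniteCache):
// TEMPLATE=MyTPLName tells Ignite to base the new table's cache on the
// configuration registered under that name via addCacheConfiguration().
cache.query(new SqlFieldsQuery(
        "CREATE TABLE Person (id LONG PRIMARY KEY, name VARCHAR) " +
        "WITH \"TEMPLATE=MyTPLName\"")).getAll();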

This flag doesn't work for dynamic caches because it could cause ambiguity around table names, as described in this thread on the Ignite dev list: http://apache-ignite-developers.2346864.n4.nabble.com/Remove-deprecate-CacheConfiguration-sqlEscapeAll-property-td17966.html
By the way, what is the problem you are trying to solve with this flag?


Spring R2DBC + SQL Server: procedures query

I am required to execute a stored procedure on a SQL Server to fetch some data. Since I will later save the data into MongoDB, and that side uses ReactiveMongoTemplate and friends, I introduced Spring R2DBC:
implementation("org.springframework.data:spring-data-r2dbc:1.0.0.RELEASE")
implementation("io.r2dbc:r2dbc-mssql:0.8.1.RELEASE")
I see that I can do SELECT, INSERT, and so on with R2DBC, but is it possible to EXEC proc_name? I tried it, and it hangs forever until the test terminates, with neither success nor failure. The last line of the log is:
io.r2dbc.mssql.QUERY - Executing query: EXEC "SCHEMA"."MY_PROCEDURE"
The code is like:
public Flux<Coupon> selectWithProcedure() {
    return databaseClient
            .execute("EXEC \"SCHEMA\".\"MY_PROCEDURE\" ")
            .as(Coupon.class)
            .fetch().all()
            .doOnNext(coupon ->
                    coupon.setCouponStatusRefFromId(coupon.getCouponStatusRefId()));
}
And it seems that no data is retrieved.
If I test other methods with simple queries like SELECT..., they work. But the problem is that the DBAs do not allow my app to read table data directly; instead, they create a procedure for me. If this query is not possible, I must go the traditional JPA way, and going reactive on the Mongo side loses its point.
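For comparison, a plain query through the same DatabaseClient does return data; a sketch, with a hypothetical coupon table:
public Flux<Coupon> selectPlain() {
    // Simple SELECTs map fine through fetch().all(); only EXEC hangs.
    return databaseClient
            .execute("SELECT id, coupon_status_ref_id FROM coupon")
            .as(Coupon.class)
            .fetch()
            .all();
}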
Well, I just saw this on https://github.com/r2dbc/r2dbc-mssql for version 0.8.1:
Next steps:
Execution of stored procedures
Add support for TVP and UDTs
And:
https://r2dbc.io/2019/05/13/r2dbc-0-8-milestone-8-released
We already have a few tickets lined up for the next milestone, and we know that they will require further SPI modifications:
Support for Auto-Commit
Connection Validation
Support for Stored Procedures

Why Spark dataframe cache doesn't work here

I just wrote a toy class to test the Spark DataFrame API (actually Dataset, since I'm using Java).
Dataset<Row> ds = spark.sql("select id, name, gender from test2.dummy where dt='2018-12-12'");
ds = ds.withColumn("dt", lit("2018-12-17"));
ds.cache();
ds.write().mode(SaveMode.Append).insertInto("test2.dummy");
System.out.println(ds.count());
According to my understanding, there are two actions here: insertInto and count.
I stepped through the code in the debugger; while insertInto runs, I see several lines like:
19/01/21 20:14:56 INFO FileScanRDD: Reading File path: hdfs://ip:9000/root/hive/warehouse/test2.db/dummy/dt=2018-12-12/000000_0, range: 0-451, partition values: [2018-12-12]
When running "count", I still see similar logs:
19/01/21 20:15:26 INFO FileScanRDD: Reading File path: hdfs://ip:9000/root/hive/warehouse/test2.db/dummy/dt=2018-12-12/000000_0, range: 0-451, partition values: [2018-12-12]
I have two questions:
1) When there are two actions on the same DataFrame like the above, if I don't call ds.cache or ds.persist explicitly, will the second action always cause the SQL query to be re-executed?
2) If I understand the log correctly, both actions trigger HDFS file reads; does that mean ds.cache() doesn't actually work here? If so, why doesn't it work?
Many thanks.
It's because you append into the table that ds is created from, so ds needs to be recomputed because the underlying data changed. In such cases, Spark invalidates the cache. See, for example, this Jira (https://issues.apache.org/jira/browse/SPARK-24596):
When invalidating a cache, we invalidate other caches dependent on this
cache to ensure cached data is up to date. For example, when the
underlying table has been modified or the table has been dropped
itself, all caches that use this table should be invalidated or
refreshed.
Try running ds.count before inserting into the table.
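That is, a sketch of the reordering, with the rest of the snippet unchanged:
ds.cache();
System.out.println(ds.count()); // materializes the cache before the table is modified
ds.write().mode(SaveMode.Append).insertInto("test2.dummy");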
I found that the other answer doesn't work. What I had to do was break the lineage, so that the DataFrame I was writing did not know that one of its sources was the table I was writing to. To break lineage, I created a copy of the DataFrame with (this snippet is PySpark):
copy_of_df = sql_context.createDataFrame(df.rdd)
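In the Java API used in the question, the same lineage break might look roughly like this (a sketch, reusing ds from the question):
// Rebuilding the Dataset from its RDD and schema gives it a logical plan
// with no reference back to test2.dummy, so the write cannot invalidate it.
Dataset<Row> copyOfDs = spark.createDataFrame(ds.javaRDD(), ds.schema());
copyOfDs.cache();
copyOfDs.write().mode(SaveMode.Append).insertInto("test2.dummy");
System.out.println(copyOfDs.count());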

Cassandra Java datastax 2.1.8 Cannot connect to keyspace with quotes

I have a simple piece of code for removing data from Cassandra 2:
Cluster myCluster = Cluster.builder().addContactPoint(myhost).withPort(myport).build();
Session session = myCluster.connect(keyspaceName);
session.execute(deleteStatement); // just a simple Delete.Where
So basically, when I try to do something on, for example, keyspaceName = "test", it will happily execute my delete statement. But if I try the same thing with, for example, keyspaceName = "\"DONT_WORK\"" (since I have a quoted keyspace name in Cassandra), it won't work and throws:
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: localhost/127.0.0.1:16660 (com.datastax.driver.core.ConnectionException: [localhost/127.0.0.1:16660] Pool is shutdown))
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:214)
I need help, please.
PS: I even tried the static Metadata.quote() method from the DataStax library; it still doesn't work.
You should not need to quote the keyspace name for connecting. Quoting is important to protect case sensitivity in the context of a CQL string, but you do not need to protect it if you are passing the keyspace name as an API parameter.
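A sketch of what that advice means in code (host, port, and table name here are hypothetical):
// Pass the raw keyspace name to the API...
Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").withPort(9042).build();
Session session = cluster.connect("DONT_WORK");
// ...and quote only where case sensitivity matters, inside the CQL string itself.
session.execute("DELETE FROM \"DONT_WORK\".\"MyTable\" WHERE id = 1");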
OK, there is no need for further investigation of this problem. The issue was that I accidentally used the 2.1.8 DataStax library against Cassandra 2.0.8. I have to stop using the numeric keypad. A simple mistake, but it sure made quite a fuss.

MyBatis: how to bypass a local cache and directly hit the DB on specific select

I use MyBatis 3.1.
I have two use cases where I need to bypass the MyBatis local cache and hit the DB directly.
Since the MyBatis configuration file only has global settings, it is not applicable to my case: I need this as an exception, not as a default. The attributes of the MyBatis <select> XML statement do not seem to include this option either.
Use case 1: 'select sysdate from dual'.
MyBatis caching causes this one to always return the same value within a MyBatis session. That breaks my integration test when I try to replicate a situation with an outdated entry.
My workaround was just to use a plain JDBC call.
Use case 2: 'select' from one thread does not always see the value written by another thread.
Thread 1:
SomeObject stored = dao.insertSomeObject(obj);
runInAnotherThread(stored.getId());
//complete and commit
Thread 2:
// 'id' received as the argument passed to 'runInAnotherThread(...)'
SomeObject stored = dao.findById(id);
int count = 0;
while (stored == null && count < 300) {
    ++count;
    Thread.sleep(1000);
    stored = dao.findById(id);
}
if (stored == null) {
    throw new MyException("There is no SomeObject with id=" + id);
}
I occasionally get MyException on a server but can't reproduce it on my local machine. In all cases the object is actually in the DB. So I guess the error depends on whether the stored object was already in the MyBatis local cache the first time around, and waiting for 5 minutes does not help, since the actual DB is never checked again.
So my question is: how do I solve the above use cases within MyBatis, without falling back to plain JDBC?
Being able to somehow signal MyBatis not to use a cached value for a specific call (best) or for all calls to a specific query would be the preferred option, but I will consider any workaround as well.
I don't know of a way to bypass the local cache entirely, but there are two options for achieving what you need.
The first option is to set flushCache="true" on the select. This clears the cache after the statement executes, so the next query will hit the database.
<select id="getCurrentDate" resultType="date" flushCache="true">
    SELECT SYSDATE FROM DUAL
</select>
Another option is to use STATEMENT-level local cache scope. By default the local cache lives for the duration of a SESSION (which typically translates to a transaction). This is controlled by the localCacheScope option and is set per session factory, so it affects all queries that go through that MyBatis session factory.
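A sketch of that setting in mybatis-config.xml:
<settings>
    <!-- Local cache entries are discarded after each statement instead of
         being kept for the whole session. -->
    <setting name="localCacheScope" value="STATEMENT"/>
</settings>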
Let me summarize.
The solution from the previous answer, the flushCache="true" option on the query, works and solves both use cases. It flushes the cache after every such select, so the next select statement hits the DB. Although it only takes effect after the select statement has executed, that's OK, since the cache is empty anyway before the first select.
Another solution is to start a new session. I use Spring, so it's enough to mark a method with @Transactional(propagation = Propagation.REQUIRES_NEW). Since the MyBatis session is tied to the Spring transaction, this creates another MyBatis session with a fresh cache every time the method is called.
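A sketch of that Spring variant (the service method name is hypothetical):
@Transactional(propagation = Propagation.REQUIRES_NEW)
public SomeObject findByIdFresh(long id) {
    // Runs in a new transaction, hence a new MyBatis session with an empty local cache.
    return dao.findById(id);
}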
For some reason, the MyBatis option useCache="false" on the query does not work.
The following Options annotation can be used:
@Options(useCache = false, flushCache = Options.FlushCachePolicy.TRUE)
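In context, on a mapper interface method (a sketch; the mapper and query are illustrative):
public interface TimeMapper {
    @Select("SELECT SYSDATE FROM DUAL")
    @Options(useCache = false, flushCache = Options.FlushCachePolicy.TRUE)
    Date getCurrentDate();
}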
Apart from the answers by Roman and Alexander, there is one more solution for this:
Configuration configuration = MyBatisUtil.getSqlSessionFactory().getConfiguration();
Collection<Cache> caches = configuration.getCaches();
// If you have multiple caches and want a particular one cleared:
// Cache cache = configuration.getCache("PPL"); // namespace of the particular mapper XML
for (Cache cache : caches) {
    Lock w = cache.getReadWriteLock().writeLock();
    w.lock();
    try {
        cache.clear();
    } finally {
        w.unlock();
    }
}

EclipseLink can't retrieve entities inserted manually

I'm having some trouble with EclipseLink. My program has to interact with a database (representing a building). I've written a little input test mode where I can manually insert data through the console.
My problem: a normal get-by-ID operation works just fine for an entity I previously inserted through EclipseLink itself (via commit()), but it throws a NoResultException when selecting a row that was inserted manually via an SQL script (building -> lots of rooms -> script).
This (oversimplified) works fine:
main() {
    MyRoom r = new MyRoom();
    r.setID("floor1-roomnr4");
    em.persist(r); // entity manager; the transaction is committed here
    DAO.getRoomByID("floor1-roomnr4"); // works
}
whereas the combination of the generation script plus a simple getRoomByID() throws an exception.
If I run the exact select statement that just threw a NoResultException in SQL Developer, I get the result I want. I also only get this problem in the input mode; otherwise, selecting the generated rows works fine as well.
Does EclipseLink have some cache-mechanism I'm unaware of which is causing some problem?
Are you sure EclipseLink and SQL Developer are connected to the same database? Please verify the connection information for both. Also, is the generation script committing the changes with the "commit" command?
If EclipseLink works similarly to Hibernate, then yes, there is a cache. The "first level cache" guarantees that you get the exact same instance within one transaction, which makes sense. If you know EclipseLink/transactions, try evicting all loaded instances or committing the transaction, and then run your DAO call again. This forces EclipseLink to fetch the data from the database again.
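A sketch of the eviction idea through the standard JPA Cache API, which EclipseLink implements (assuming the em from the question):
// Drop everything from the shared (second-level) cache...
em.getEntityManagerFactory().getCache().evictAll();
// ...and/or detach all instances in the persistence context (first-level cache),
// so the next query goes to the database instead of a cached instance.
em.clear();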
See this answer to a similar question.
