I'm using Astyanax version 1.56.26 with Cassandra version 1.2.2
Ok, a little situational overview:
I have created a very simple column family using cqlsh like so:
CREATE TABLE users (
name text PRIMARY KEY,
age int,
weight int
);
I populated the column family (no empty columns)
Querying users via cqlsh yields expected results
Now I want to programmatically query users, so I try something like:
ColumnFamily<String, String> users =
new ColumnFamily<String, String>("users", StringSerializer.get(), StringSerializer.get());
OperationResult<ColumnList<String>> result = ks.prepareQuery(users)
.getRow("bobbydigital") // valid rowkey
.execute();
ColumnList<String> columns = result.getResult();
int weight = columns.getColumnByName("weight").getIntegerValue();
During the assignment of weight an NPE is thrown! :(
My understanding is that the result should have contained all the columns associated with the row containing "bobbydigital" as its row key. I then tried to assign the value in the column named "weight" to the integer variable weight. I know that the variable columns is getting assigned because when I add some debug code right after the assignment, like so:
System.out.println("Column names = " + columns.getColumnNames());
I get the following output:
Column names = [, age, weight]
So why the null pointer? Can someone tell me where I went wrong? Also, why is there a blank column name?
UPDATE:
Also if I try querying in a different manner, like so:
Column<String> result = ks.prepareQuery(users)
.getKey("bobbydigital")
.getColumn("weight")
.execute().getResult();
int x = result.getIntegerValue();
I get the following exception:
InvalidRequestException(why:Not enough bytes to read value of component 0)
Thanks in advance for any help you can provide!
I figured out what I was doing incorrectly. The style of querying I was attempting is not valid for CQL tables. To query CQL tables with Astyanax you need to chain the .withCql() method onto prepareQuery(), passing a CQL statement as the argument.
Information specific to using CQL with Astyanax can be found here.
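For example, something along these lines should work (a sketch against the Astyanax 1.x API; the keyspace ks, row key, and column names are the ones from the question):
ColumnFamily<String, String> users =
        new ColumnFamily<String, String>("users", StringSerializer.get(), StringSerializer.get());
// Run a CQL statement instead of a key-based row query
OperationResult<CqlResult<String, String>> result = ks.prepareQuery(users)
        .withCql("SELECT * FROM users WHERE name = 'bobbydigital';")
        .execute();
for (Row<String, String> row : result.getResult().getRows()) {
    int weight = row.getColumns().getColumnByName("weight").getIntegerValue();
}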
I got this fixed by adding setCqlVersion:
this.astyanaxContext = new AstyanaxContext.Builder()
        .forCluster("ClusterName")
        .forKeyspace(keyspace)
        .withAstyanaxConfiguration(
                new AstyanaxConfigurationImpl()
                        .setCqlVersion("3.0.0")
                        .setDiscoveryType(NodeDiscoveryType.RING_DESCRIBE)) // discovery type is an assumption; use whatever fits your cluster
        ...
And by adding WITH COMPACT STORAGE when creating the table.
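For reference, the full context setup might look roughly like this (a sketch; the connection-pool settings, seed address, and cluster/keyspace names are placeholders to adapt):
AstyanaxContext<Keyspace> context = new AstyanaxContext.Builder()
        .forCluster("ClusterName")
        .forKeyspace(keyspace)
        .withAstyanaxConfiguration(new AstyanaxConfigurationImpl()
                .setCqlVersion("3.0.0")
                .setTargetCassandraVersion("1.2")
                .setDiscoveryType(NodeDiscoveryType.RING_DESCRIBE))
        .withConnectionPoolConfiguration(new ConnectionPoolConfigurationImpl("MyPool") // pool name is a placeholder
                .setPort(9160)                 // Thrift port
                .setMaxConnsPerHost(1)
                .setSeeds("127.0.0.1:9160"))   // placeholder seed node
        .withConnectionPoolMonitor(new CountingConnectionPoolMonitor())
        .buildKeyspace(ThriftFamilyFactory.getInstance());
context.start();
Keyspace ks = context.getClient();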
Related
I unsuccessfully attempted to leverage Spring Data's derived queries but could not accomplish the required result, so I have to write a SELECT statement manually.
I want to display one single record in my UI. This should be the most recently generated record (which means it has the highest ID Number) associated with a category that we call "ASMS". In other words, look through all the rows that have ASMS#123, find the one that has the highest ID and then return the contents of one column cell.
ASMS: Entries are classified by 11 specific ASMS numbers.
ID: AutoGenerated
PPRECORD: New entries being inserted each day
I hope the image makes this clearer.
//RETURN ONLY THE LATEST RECORD
//https://besterdev-api.apps.pcfepg3mi.gm.com/api/v1/pprecords/latest/{asmsnumber}
@RequestMapping("/pprecords/latest/{asmsNumber}")
public List<Optional<PriorityProgressEntity>> getLatestRecord(@PathVariable(value = "asmsNumber") String asmsNumber) {
    List<Optional<PriorityProgressEntity>> asms_number = priorityprogressrepo.findFirst1ByAsmsNumber(asmsNumber);
    return asms_number;
}
The ReactJS FE makes an AXIOS.get and I can retrieve all the records associated with the ASMS, but I do not have the skill to display only the JSON object that has the highest ID value. I'm happy to do this in the FE also.
I tried derived queries, but .findFirst1ByAsmsNumber(asmsNumber) does not consider the highest ID number.
Try this:
SELECT pprecord FROM YourTable WHERE id =
(SELECT MAX(id) FROM YourTable WHERE asms = '188660')
Explanation:
The outer query selects pprecord; the subquery finds the id of the newest (highest-ID) record for that ASMS number.
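If you would rather stay within Spring Data instead of raw SQL, a derived query that orders by ID should give the same result (a sketch; the repository, entity, and the asmsNumber and id property names are taken from the question):
// In the repository interface behind priorityprogressrepo:
Optional<PriorityProgressEntity> findTopByAsmsNumberOrderByIdDesc(String asmsNumber);
// In the controller:
@RequestMapping("/pprecords/latest/{asmsNumber}")
public PriorityProgressEntity getLatestRecord(@PathVariable("asmsNumber") String asmsNumber) {
    return priorityprogressrepo.findTopByAsmsNumberOrderByIdDesc(asmsNumber)
                               .orElse(null); // or map to a 404, depending on your needs
}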
I'll improve the answer if you have any additional questions. Upvotes and accepted answers are appreciated.
I want to update rows in a table which contains the following columns:
`parameter_name`(PRIMARY KEY),
`option_order`,
`value`.
I have a collection called parameterCollection which contains "parameterNames", "optionOrders" and "values". This collection does not have a fixed size; it can hold as many parameters as you want.
Imagine I have 5 parameters inside my collection (I could have 28, or 10204, too) and I am trying to update the rows of the database using the following query. Example of query:
UPDATE insight_app_parameter_option
SET option_order IN (1,2,3,4,5), value IN ('a','b','c','d','e')
WHERE parameter_name IN ('name1', 'name2', 'name3', 'name4', 'name5')
But this isn't doing the job; instead it gives back an error which says: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'IN (1,2,3,4,5), value IN ('a','b','c','d','e') WHERE parameter_name IN ('name1'' at line 2
1,2,3,4,5 -> Represent the option orders inside parameterCollection.
'a','b','c','d','e' -> Represent the values inside parameterCollection.
'name1', 'name2', 'name3', 'name4', 'name5' -> Represent the names inside parameterCollection.
I know how to update each parameter separately, but I would like to do it all at once. Here are some links I visited where people asked the same question, but they used a fixed collection of objects, not a mutable one.
MySQL - UPDATE multiple rows with different values in one query
Multiple rows update into a single query
SQL - Update multiple records in one query
That syntax is not valid in MySQL; the error you are receiving is a syntax error. You cannot assign lists of values with IN in a SET clause; SET assigns one value per column. This is the correct syntax for an UPDATE statement (ref):
UPDATE [LOW_PRIORITY] [IGNORE] table_reference
SET assignment_list
[WHERE where_condition]
[ORDER BY ...]
[LIMIT row_count]
value:
{expr | DEFAULT}
assignment:
col_name = value
assignment_list:
assignment [, assignment] ...
You need to create a separate UPDATE for each row. I suggest executing them all in a single transaction, if that fits your case.
The correct syntax for your example is:
UPDATE insight_app_parameter_option
SET option_order = 1, value = 'a'
WHERE parameter_name = 'name1';
UPDATE insight_app_parameter_option
SET option_order = 2, value = 'b'
WHERE parameter_name = 'name2';
UPDATE insight_app_parameter_option
SET option_order = 3, value = 'c'
WHERE parameter_name = 'name3';
...
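If you are issuing these updates from Java, a batched PreparedStatement run inside one transaction keeps this efficient even for thousands of parameters (a sketch assuming plain JDBC; the Parameter holder class and dataSource are hypothetical stand-ins for however you store parameterCollection):
import java.sql.Connection;
import java.sql.PreparedStatement;
...
String sql = "UPDATE insight_app_parameter_option "
           + "SET option_order = ?, value = ? WHERE parameter_name = ?";
try (Connection con = dataSource.getConnection();
     PreparedStatement ps = con.prepareStatement(sql)) {
    con.setAutoCommit(false);                  // run everything in one transaction
    for (Parameter p : parameterCollection) {  // Parameter is a hypothetical holder class
        ps.setInt(1, p.getOptionOrder());
        ps.setString(2, p.getValue());
        ps.setString(3, p.getParameterName());
        ps.addBatch();                         // one UPDATE per row, sent as a batch
    }
    ps.executeBatch();
    con.commit();
}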
I'm migrating a Java application to VB.Net and I am trying to translate the following Java code:
Statement stmt = conx.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE,
                                      ResultSet.CONCUR_READ_ONLY);
ResultSet rsSheet = stmt.executeQuery(sSql);
bStatus = rsSheet.next();
...
bStatus = rsSheet.first();
In this code, a scrollable ResultSet is used. I can read the records returned by the executeQuery() function, and when I have finished reading them, I can read them again without querying the database a second time.
You can find some information on ResultSet here https://docs.oracle.com/javase/7/docs/api/java/sql/ResultSet.html
My translated code is the following:
Dim cmd = conx.CreateCommand()
cmd.CommandText = sSql
Dim rsSheet as OracleDataReader = cmd.ExecuteReader()
bStatus = rsSheet.Read()
...
bStatus = rsSheet.? 'how to read first record again ?
But I can't find how to make the OracleDataReader scrollable.
I can read the results from the first to the last record, but I cannot read them again.
The only simple solution that I have found to read all these records again is to call the ExecuteReader() function a second time.
QUESTIONS
Is the OracleDataReader class scrollable? How?
Does another class exist to do the job? Which one?
PS: using Linq is not a solution because the SQL statements are executed in an environment where the database structure is unknown. It is impossible to create entities.
A DataReader is forward-only. Use a DataTable: it is an in-memory representation of the result set. You can also use a DataTable as a DataSource for various controls, and you can use Linq on DataTable.AsEnumerable().
Private Sub OPCode()
Dim sSql = "Your command text"
Dim dt As New DataTable
Using cn As New OracleConnection(ConStr),
cmd As New OracleCommand(sSql, cn)
cn.Open()
dt.Load(cmd.ExecuteReader)
End Using
'Code to read data.
End Sub
EDIT
The simplest way to see what is in the DataTable is to display it in a DataGridView if this is WinForms.
DataGridView1.DataSource = dt
To access a specific row and column:
Dim s = dt.Rows(1)("ColName").ToString
The Rows collection starts at index 0, and the column name comes from your Select statement. You then need to convert to the proper datatype with .ToString, CInt(), CDbl(), etc., as the indexer returns an Object.
I'm new to Couchbase. I'm using Java for this. I'm trying to remove a document from a bucket by looking up its ID with query parameters (assuming the ID is unknown).
Let's say I have a bucket called test-data. In that bucket I have a document with an ID of 555 and content of {"name":"bob","num":"10"}.
I want to be able to remove that document by querying using 'name' and 'num'.
So far I have this (hardcoded):
String statement = "SELECT META(`test-data`).id from `test-data` WHERE name = \"bob\" and num = \"10\"";
N1qlQuery query = N1qlQuery.simple(statement);
N1qlQueryResult result = bucket.query(query);
List<N1qlQueryRow> row = result.allRows();
N1qlQueryRow res1 = row.get(0);
System.out.println(res1);
//output: {"id":"555"}
So I'm getting a JSON object that has the document's ID in it. What would be the best way to extract that ID so that I can then remove the queried document from the bucket using its ID? Am I doing too many steps? Is there a better way to extract the document's ID?
bucket.remove(docID)
Ideally I'd like to use something like a N1qlQueryResult to get this going but I'm not sure how to set that up.
N1qlQueryResult result = bucket.query(select("META.id").fromCurrentBucket().where((x("num").eq("\""+num+"\"")).and(x("name").eq("\""+name+"\""))));
But that isn't working at the moment.
Any help or direction would be appreciated. Thanks.
There might be a better way, which is running this kind of query:
delete from `test-data` use keys '00000874a09e749ab6f199c0622c5cb0' returning raw META(`test-data`).id
or, if your fields have an index:
delete from `test-data` where name='bob' and num='10' returning raw META(`test-data`).id
This query deletes the document with the given document key (which is META(`test-data`).id) and returns the document ID of the deleted document if it deletes one. It returns an empty result if no documents were deleted.
You can implement this query with the Couchbase SDK as follows:
Statement statement = deleteFrom("test-data")
.where(x("name").eq(s("bob")).and(x("num").eq(s("10"))))
.returningRaw(meta(i("test-data")).get("id"));
You can make this statement parameterized or just execute it as is.
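For completeness, executing that statement and reading back the returned IDs could look roughly like this (a sketch against the 2.x Java SDK, reusing the allRows() style from the question; error handling omitted):
// run the DSL statement built above
N1qlQueryResult result = bucket.query(N1qlQuery.simple(statement));
for (N1qlQueryRow row : result.allRows()) {
    System.out.println(row); // each row holds the META().id of a deleted document
}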
I have a query in which I am fetching the sum of a column from a table formed through a subquery.
Something along these lines:
select temp.mySum as MySum from (select sum(myColumn) as mySum from mySchema.myTable) temp;
However, I don't want MySum to be null when temp.mySum is null. Instead I want MySum to carry the string 'value not available' when temp.mySum is null.
So I tried to use coalesce in the following manner:
select coalesce(temp.mySum, 'value not available') as MySum from (select sum(myColumn) as mySum from mySchema.myTable) temp;
However, the above query throws the following error message:
Message: The data type, length or value of argument "2" of routine "SYSIBM.COALESCE" is incorrect.
This message is because of a datatype incompatibility between arguments 1 and 2 of the coalesce function, as mentioned in the answer below.
However, I am using this query directly in Jasper to send values to an Excel report:
hashmap.put("myQuery", this.myQuery);
JasperReport jasperReportOne = JasperCompileManager.compileReport(this.reportJRXML);
JasperPrint jasperPrintOne = JasperFillManager.fillReport(jasperReportOne, hashmap, con);
jprintList.add(jasperPrintOne);
JRXlsExporter exporterXLS = new JRXlsExporter();
exporterXLS.setParameter(JRExporterParameter.JASPER_PRINT_LIST, jprintList);
exporterXLS.exportReport();
In the Excel sheet, I am getting the value as null when the value is not available. I want to show 'value not available' in the report.
How could this be achieved ?
Thanks for reading!
The arguments to coalesce must be compatible. That's not the case if the first is numeric (as mySum probably is) and the second is a string.
For example, the following PubLib doco has a table indicating compatibility between various types, at least for the DB2 I work with (the mainframe one) - no doubt there are similar restrictions for the iSeries and LUW variants as well.
You can try something like coalesce(temp.mySum, 0) instead, or convert the first argument to a string with something like char(). Either of those should work, since they make the two arguments compatible.
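For example, casting the sum to a character type keeps both COALESCE arguments compatible, so the report cell gets the text instead of null (a sketch; myQuery is the field passed to Jasper in the question, and the schema and column names are the ones used above):
// build the query so COALESCE compares two character values
this.myQuery =
        "select coalesce(char(temp.mySum), 'value not available') as MySum "
      + "from (select sum(myColumn) as mySum from mySchema.myTable) temp";
hashmap.put("myQuery", this.myQuery);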