Using Cassandra triggers - Java

I have a Cassandra table like this:
 keyspace_name | columnfamily_name | column_name | component_index
---------------+-------------------+-------------+-----------------
            aw |              test |  as_of_date |               0
            aw |              test |        data |               1
            aw |              test |   record_id |            null
            aw |              test | upload_time |               1
And I want to create a trigger that will print (via slf4j, for example) the rows that would be inserted, in the following format:
key = key1
column_name1=value1
column_name2=value2
...
column_namen=valuen
Is it possible to get the column name in a trigger?
I tried an example from the internet, but it prints incorrect data.
public Collection<RowMutation> augment(ByteBuffer key, ColumnFamily update) {
    String localKey = new String(key.array(), Charset.forName("UTF-8"));
    logger.info("key={}.", localKey);
    for (Column cell : update) {
        try {
            String name = ByteBufferUtil.string(cell.name());
            logger.info("name={}.", name);
            String value = ByteBufferUtil.string(cell.value());
            logger.info("value={}.", value);
        } catch (Exception e) {
            logger.info("Exception={}.", e.getMessage());
        }
    }
    return Collections.emptyList();
}
As I understand it, I must convert cell.value() to a specific data type, like this:
Date date = TimestampType.instance.compose(cell.value());
But I don't know how to detect the field type, and I don't understand why I can't get the column name using ByteBufferUtil.string(cell.name()).

To properly format cell names and values you must use the CFMetaData. The correct version of the code should be:
public Collection<Mutation> augment(ByteBuffer key, ColumnFamily update)
{
    CFMetaData cfm = update.metadata();
    String localKey = cfm.getKeyValidator().getString(key);
    logger.info("key={}.", localKey);
    for (Cell cell : update)
    {
        try
        {
            String name = cfm.comparator.getString(cell.name());
            logger.info("name={}.", name);
            String value = cfm.getValueValidator(cell.name()).getString(cell.value());
            logger.info("value={}.", value);
        } catch (Exception e) {
            logger.info("Exception={}.", e.getMessage());
        }
    }
    return Collections.emptyList();
}
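If you need the typed Java value rather than its string representation, a hedged variant inside the same try block could go through the validator as well; using compose() on the AbstractType (from org.apache.cassandra.db.marshal) is my reading of the Cassandra 2.x internal API, not part of the answer above:

// Sketch only: the validator can also decompose the raw ByteBuffer into a Java object.
AbstractType<?> valueType = cfm.getValueValidator(cell.name());
Object typedValue = valueType.compose(cell.value()); // e.g. a java.util.Date for a timestamp column
logger.info("typed value={} ({})", typedValue, valueType.getClass().getSimpleName());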

Related

Between function in Spark using Java

I have two dataframe :
Dataframe 1
+-----------------+-----------------+
| hour_Entre | hour_Sortie |
+-----------------+-----------------+
| 18:30:00 | 05:00:00 |
| | |
+-----------------+-----------------+
Dataframe 2
+-----------------+
| hour_Tracking |
+-----------------+
| 19:30:00 |
+-----------------+
I want to get the hour_Tracking values that are between hour_Entre and hour_Sortie.
I tried the following code:
boolean checked = true;
try {
    if (df1.select(col("heureSortie")) != null && df1.select(col("heureEntre")) != null) {
        checked = checked && df2.select(col("dateTracking_hour_minute").between(df1.select(col("heureSortie")), df1.select(col("heureEntre"))));
    }
} catch (Exception e) {
    e.printStackTrace();
}
But I get this error:
Operator && cannot be applied to boolean , 'org.apache.spark.sql.Dataset<org.apache.spark.sql.Row>'
In case you are looking for the hour difference:
First, create the date difference:
from pyspark.sql import functions as F
df = df.withColumn('date_diff', F.datediff(F.to_date(df.hour_Entre), F.to_date(df.hour_Sortie)))
Then calculate the hour difference out of that:
df = df.withColumn('hours_diff', (df.date_diff*24) +
F.hour(df.hour_Entre) - F.hour(df.hour_Sortie))
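Since the question is tagged Java, here is a rough Java sketch of the same idea, assuming the two hour columns end up in a single DataFrame df (my assumption, not part of the answer above). Note that between() builds a Column expression to pass to filter(), rather than something to combine with Java's && operator:

import static org.apache.spark.sql.functions.*;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

// Hour difference, mirroring the PySpark snippet above (sketch only):
Dataset<Row> withDiff = df
        .withColumn("date_diff",
                datediff(to_date(col("hour_Entre")), to_date(col("hour_Sortie"))))
        .withColumn("hours_diff",
                col("date_diff").multiply(24)
                        .plus(hour(col("hour_Entre")))
                        .minus(hour(col("hour_Sortie"))));

// Filtering rows whose tracking hour lies between the two bounds:
Dataset<Row> inRange = df.filter(
        col("dateTracking_hour_minute").between(col("hour_Entre"), col("hour_Sortie")));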

How to get parameters of a step with multiple pipes in Cucumber for the Java code?

I have the below steps in my feature file for a scenario:
Given my_first_step
And my_second_step
| Themes          | one | three |
| Service Windows | two | four  |
And my_third_step
| Create Apps |
| Config |
We can get the parameters of 'my_third_step' as a list in the Java code, like this:
public void my_third_step(List<String> listOfItems) {}
But how can I get the parameters in 'my_second_step'? I need to get each row as an array of elements in the Java code. How can I do that?
You have to pass a list of objects; your object will look like this:
public class MyObject {
    private String themes;
    private String service;

    public String getThemes() {
        return this.themes;
    }

    public void setThemes(String themes) {
        this.themes = themes;
    }

    public String getService() {
        return this.service;
    }

    public void setService(String service) {
        this.service = service;
    }
}
Then you can pass a List<MyObject> to the method.
public void my_second_step(List<MyObject> items) {
    ...
}
In the feature file change the definition as follows:
And my_second_step
| Themes | Service |
| one    | two     |
| three  | four    |
I hope this helps.
Using a header row, we can handle the Data Table in a much cleaner and more precise way. Assuming the Data Table looks like the one below:
And my_second_step
| Heading_1       | Heading_2 | Heading_3 |
| Themes          | one       | three     |
| Service Windows | two       | four      |
public void my_second_step(DataTable table) throws Throwable {
    List<Map<String, String>> list = table.asMaps(String.class, String.class);
    System.out.println(list.get(0).get("Heading_1") + " : " + list.get(1).get("Heading_1"));
    System.out.println(list.get(0).get("Heading_2") + " : " + list.get(1).get("Heading_2"));
    System.out.println(list.get(0).get("Heading_3") + " : " + list.get(1).get("Heading_3"));
}
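If you want to keep the original, header-less table and simply get each row as a list of elements, a minimal sketch along the same lines (assuming the same Cucumber DataTable API as the snippet above):

public void my_second_step(DataTable table) throws Throwable {
    // Each inner list is one row of the table, in order:
    // ["Themes", "one", "three"], ["Service Windows", "two", "four"]
    List<List<String>> rows = table.asLists(String.class);
    for (List<String> row : rows) {
        System.out.println(row.get(0) + " -> " + row.subList(1, row.size()));
    }
}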

Spring, Hibernate with MySQL: image is not displaying properly

Yesterday I posted this question but did not get any answer.
In my project I have stored an image file in a MySQL database.
The table is like:
+-----------+---------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-----------+---------------------+------+-----+---------+----------------+
| upload_id | int(11) | NO | PRI | NULL | auto_increment |
| uId | int(20) unsigned | NO | MUL | NULL | |
| file_name | varchar(128) | YES | | NULL | |
| file_data | longblob | YES | | NULL | |
+-----------+---------------------+------+-----+---------+----------------+
Now I am fetching the image data. My DAO class is:
public List<ImageClass> fetchallimage(int uId) {
    ImageClass imageClass = new ImageClass(uId);
    String hql = "FROM ImageClass WHERE uId = :uId1";
    Query query = (Query) sessionFactory.getCurrentSession().createQuery(hql).setParameter("uId1", imageClass.getUserId());
    return query.list();
}
My ImageClass is:
@Table(name = "imageStore")
public class ImageClass {
    private long id;
    private String fileName;
    private byte[] data;

    // getters and setters with @Column annotations
}
My controller class is:
@RequestMapping(method = RequestMethod.GET)
public String ImageFetch(Map<String, Object> map, HttpServletRequest request,
        HttpSession session, HttpServletResponse response) {
    int uid = (int) session.getAttribute("uId");
    List<ImageClass> imageClass;
    imageClass = PicService.fetchallimage(uid);
    map.put("image", imageClass);
    return "account";
}
And the JSP page is:
<c:forEach items="${image}" var="info">
    <div style="width:380px;display:block;text-align:center;">
        <img src="${info.data}" border="0" alt="Dating"
             style="margin:10px;padding:15px;border:10px solid #FEB4DE;background:#FECDE9;">
    </div>
</c:forEach>
But instead of the picture, the page displays something like this:
In the page the image looks like: [B@1284b24
Why? What do I have to do to get the actual image? Please suggest.
I have edited it to:
for (ImageClass temp : imageClass) {
    byte[] encodeBase64 = Base64.encode(temp.getData());
    String base64Encoded = new String(encodeBase64, "UTF-8");
    map.put("image", base64Encoded);
}
And in the JSP: <img src="data:image/jpeg;base64,${image}" />
I am still not getting the picture.
In the logger I am getting this output:
You have to convert that image to a String and then you can easily display it in JSP. I presume it's a byte[]. Use a @Transient variable; it will help:
import sun.misc.BASE64Encoder;

BASE64Encoder base64Encoder = new BASE64Encoder();
object.setTransientString("data:image/png;base64," + base64Encoder.encode(object.getByteArrayDataVariable()));
In the JSP, use the img tag and put the String variable there. Enjoy.
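A minimal sketch of what that can look like on the controller side, assuming java.util.Base64 (Java 8+) and a hypothetical @Transient field base64Data on ImageClass (neither is part of the original code):

import java.util.Base64;
import java.util.List;

// Sketch only: encode each image on the server side and expose it through a
// transient getter, so the JSP never touches the raw byte[].
List<ImageClass> images = PicService.fetchallimage(uid);
for (ImageClass img : images) {
    String encoded = Base64.getEncoder().encodeToString(img.getData());
    img.setBase64Data("data:image/jpeg;base64," + encoded); // hypothetical @Transient field
}
map.put("image", images);

In the JSP, the loop would then use <img src="${info.base64Data}" /> (again, a hypothetical property name).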

Jena: Getting an empty result set

I am getting an empty result set when I try to retrieve the data stored in a Jena model.
This is the code to load the data (I have removed the imports for brevity):
package basic;

//imports here

public class DataLoaderQn {
    public static void main(String[] args) throws FileNotFoundException {
        String resourceURI = "http://www.abc123.com/riskmodelling/static/risks";
        String directory = "D:/mywork/dp/projs/static-risks/data/test";
        Dataset dataset = TDBFactory.createDataset(directory);
        Model model = null;
        try {
            dataset.begin(ReadWrite.WRITE);
            model = dataset.getDefaultModel();
            model.enterCriticalSection(Lock.WRITE);
            model = model.removeAll();
            Resource projectNatureRes = model.createResource(resourceURI + ":PROJECT_NATURE");
            Property projectNatureRiskProps = model.createProperty(resourceURI + ":COMPLEX_FUNCTIONALITY");
            Bag projectNatureRisks = model.createBag();
            projectNatureRisks.add("More defects");
            projectNatureRisks.add("Effort estimation inaccurate");
            projectNatureRes.addProperty(projectNatureRiskProps, projectNatureRisks);
            Property migrationRiskProps = model.createProperty(resourceURI + ":MIGRATION");
            Bag migrationRisks = model.createBag();
            migrationRisks.add("Lack of knowledge of exsting application");
            migrationRisks.add("Documentation not available");
            projectNatureRes.addProperty(migrationRiskProps, migrationRisks);
            model.write(System.out);
            model.write(new FileOutputStream(new File(directory + "/Project_risk.ttl")), "N-TRIPLES");
            model.commit();
            dataset.commit();
            TDB.sync(model);
        } finally {
            dataset.end();
            model.leaveCriticalSection();
        }
    }
}
And this is how I read the data back from a Java program:
public class ReadBasicQn {
    public static void main(String[] args) {
        String directory = "D:/mywork/dp/projs/static-risks/data/test";
        Dataset dataset = TDBFactory.createDataset(directory);
        Model model = null;
        ResultSet rs;
        QueryExecution qexeExecution = null;
        try {
            /*model = ModelFactory.createDefaultModel();
            TDBLoader.loadModel(model, directory + "/Project_risk.ttl");*/
            model = dataset.getDefaultModel();
            String queryString = "PREFIX proj: <http://www.abc123.com/riskmodelling/static/risks#> ";
            queryString += "select ?risks where ";
            queryString += "{proj:PROJECT_NATURE proj:COMPLEX_FUNCTIONALITY ?risks}";
            String queryString2 = "SELECT * WHERE { ?s ?p ?o }";
            Query q = QueryFactory.create(queryString);
            qexeExecution = QueryExecutionFactory.create(q, model);
            rs = qexeExecution.execSelect();
            ResultSetFormatter.out(System.out, rs);
            qexeExecution.close();
            q = QueryFactory.create(queryString2);
            qexeExecution = QueryExecutionFactory.create(q, model);
            rs = qexeExecution.execSelect();
            ResultSetFormatter.out(System.out, rs);
            /*while (rs.hasNext()) {
                QuerySolution qSol = rs.nextSolution();
                RDFNode n = qSol.get("risks");
                System.out.println(n);
            }*/
        } finally {
            qexeExecution.close();
        }
    }
}
The output of the second query (SELECT *) from the ResultSetFormatter shows:
------------------------------------------------------------------------------------------------------------------------------------------------
| s | p | o |
===================================================================================================================================================================================================
| <http://www.abc123.com/riskmodelling/static/risks:PROJECT_NATURE> | <http://www.abc123.com/riskmodelling/static/risks:COMPLEX_FUNCTIONALITY> | _:b0 |
| <http://www.abc123.com/riskmodelling/static/risks:PROJECT_NATURE> | <http://www.abc123.com/riskmodelling/static/risks:MIGRATION> | _:b1 |
| _:b0 | <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> | <http://www.w3.org/1999/02/22-rdf-syntax-ns#Bag> |
| _:b0 | <http://www.w3.org/1999/02/22-rdf-syntax-ns#_1> | "More defects" |
| _:b0 | <http://www.w3.org/1999/02/22-rdf-syntax-ns#_2> | "Effort estimation inaccurate" |
| _:b1 | <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> | <http://www.w3.org/1999/02/22-rdf-syntax-ns#Bag> |
| _:b1 | <http://www.w3.org/1999/02/22-rdf-syntax-ns#_1> | "Lack of knowledge of exsting application" |
| _:b1 | <http://www.w3.org/1999/02/22-rdf-syntax-ns#_2> | "Documentation not available" |
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
which means the data is available and correctly loaded (correct?). The custom query, however, returns the following output:
---------
| risks |
=========
---------
Any help is appreciated. I have just about started with Jena, so maybe I am doing something really foolish.
There's a typo. Your prefix ends in #, but in your URIs, there's a :. Here's your prefix declaration and URI, lined up:
prefix proj: <http://www.abc123.com/riskmodelling/static/risks#>
              http://www.abc123.com/riskmodelling/static/risks:PROJECT_NATURE
                                                               ^
                                                               |
In your URI, it's risks[COLON]PROJECT_NATURE, not risks[HASH]PROJECT_NATURE. You need to change your prefix to:
prefix proj: <http://www.abc123.com/riskmodelling/static/risks:>
or change your data to
Resource projectNatureRes = model.createResource(resourceURI+"#PROJECT_NATURE");
// ...and in a few other places, too
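Applied to the reader program above, and assuming you keep the data as it was written, the corrected query string would then look roughly like this:

// The prefix now ends with ':' so that proj:PROJECT_NATURE expands to the URI actually stored in TDB.
String queryString = "PREFIX proj: <http://www.abc123.com/riskmodelling/static/risks:> ";
queryString += "SELECT ?risks WHERE ";
queryString += "{ proj:PROJECT_NATURE proj:COMPLEX_FUNCTIONALITY ?risks }";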
I had a similar issue, but in my case it came from a corrupt TDB dataset that didn't throw an exception early on and instead returned results similar to the OP's. Another strange variation I saw was a blank row in place of a result. I was forced to rebuild the dataset from the original source once I received the "Impossible Large Object" exception. So if you see weird results, rebuilding the dataset from scratch (clearing the previously existing dataset files from disk) might be one way to go, depending on your situation of course.

ActionListener changes the entire row

I have a JComboBox in my table. If the user selects "Others" from the ComboBox I need to hide column number 3 in the table.
Code
final TableColumn col5 = jTable1.getColumnModel().getColumn(4);
col5.setPreferredWidth(150);
final String EDIT = "edit";
String[] options = new String[]{"Font Issue", "Text Issue", "Image Issue", "AI Issue", "Others"};
JComboBox combo1 = new JComboBox(options);
JComboBox combo2 = new JComboBox(options);
col5.setCellEditor(new DefaultCellEditor(combo1));
col5.setCellRenderer(new ComboBoxRenderer(combo2));
combo2.addActionListener(new ActionListener() {
    public void actionPerformed(ActionEvent e) {
        String newSelection = col5.getCellEditor().getCellEditorValue().toString();
        String strOthersRemark = "";
        if (newSelection.equalsIgnoreCase("others")) {
            jTable1.removeColumn(jTable1.getColumnModel().getColumn(3));
        }
    }
});
The code works fine but with one small issue: when the user selects "Others" it removes the entire column instead of just the cell in that row. For example:
Row | Column1 | Column2 | Column3 | Column4 |
  1 | Test11  | Test12  | Test13  | Test14  |
  2 | Test21  | Test22  | Test23  | Test24  |
  3 | Test31  | Test32  | Test33  | Others  |
When the user selects "Others" in Column4, it should hide only Test33, not the entire Column3. My code removes the entire Column3. What should I do if I want to hide Test33 only?
You're removing the column:
jTable1.removeColumn(jTable1.getColumnModel().getColumn(3));
Instead, you should change the value of that particular cell.
Use this method instead: table.setValueAt(). Java doc: setValueAt
In your example:
jTable1.setValueAt("", 3, 3);
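Hardcoding the row index only covers that one example row. Here is a minimal sketch that clears column 3 of whichever row is currently being edited, assuming the listener is attached to the editor combo (combo1) rather than the renderer (that change is my assumption, not part of the answer above):

combo1.addActionListener(new ActionListener() {
    public void actionPerformed(ActionEvent e) {
        JComboBox source = (JComboBox) e.getSource();
        String newSelection = String.valueOf(source.getSelectedItem());
        if (newSelection.equalsIgnoreCase("others")) {
            int editingRow = jTable1.getEditingRow();
            if (editingRow >= 0) {
                // Clear only the cell in column 3 of the edited row; the column itself stays.
                jTable1.setValueAt("", editingRow, 3);
            }
        }
    }
});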
