I have an XML input file that looks like:
<mbean className="OperatingSystem">
<attribute>
<attributeName>Arch</attributeName>
<formatter>STRING</formatter>
</attribute>
<attribute>
<attributeName>ProcessCpuLoad</attributeName>
<formatType>PERCENT</formatType>
</attribute>
</mbean>
I've created a POJO called 'Mbean' that looks like:
#XmlRootElement(name = "mbean")
#XmlAccessorType(XmlAccessType.FIELD)
public class Mbean
{
#XmlElement(name = "attribute")
private List<Attribute> attributes = null;
#XmlAttribute(name = "className")
private String className;
public String getClassName() {
return className;
}
}
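The Attribute class is not shown in the post; a minimal sketch consistent with the XML above (the field and getter names are assumptions) could be:

@XmlAccessorType(XmlAccessType.FIELD)
public class Attribute {
    @XmlElement(name = "attributeName")
    private String name;
    @XmlElement(name = "formatter")
    private String formatter;
    @XmlElement(name = "formatType")
    private String formatType;

    public String getName() {
        return name;
    }
    public String getFormatter() {
        return formatter;
    }
    public String getFormatType() {
        return formatType;
    }
}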
I can successfully unmarshal my XML file into this POJO, and my application can use the object as needed. The input file tells me which information I need to pull from a particular MBean. Is there a way to create multiple tables based on the XML file, so that when I pull that information I can store it in that table structure and then use JDBC to create the SQL tables on my H2 database?
For example, I would like to create tables that look like:
+------------------------+
| MBeans |
+------+-----------------+
| ID | MBeanName |
+------+-----------------+
| 1 | OperatingSystem |
+------+-----------------+
+--------------------------------+
| Attributes |
+------+--------+----------------+
| ID | MbeanId| AttributeName |
+------+--------+----------------+
| 1 | 1 | Arch |
+------+--------+----------------+
| 2 | 1 | ProcessCpuLoad |
+------+--------+----------------+
+------------------------------------+
| OperatingSystem.Arch |
+------+--------+------------+-------+
| ID | MbeanId| AttributeId| Value |
+------+--------+------------+-------+
| 1 | 1 | 1 | amd64 |
+------+--------+------------+-------+
| 2 | 1 | 1 | amd64 |
+------+--------+------------+-------+
+------------------------------------+
| OperatingSystem.ProcessCpuLoad |
+------+--------+------------+-------+
| ID | MbeanId| AttributeId| Value |
+------+--------+------------+-------+
| 1 | 1 | 2 | 0.009 |
+------+--------+------------+-------+
| 2 | 1 | 2 | 0.0691|
+------+--------+------------+-------+
I would first write:
A method mapping className to a table name: public String getTableName(String className)
A method mapping attributeName to a column name: public String getColumnName(String attributeName)
A method mapping formatType or formatter to a database type: public String getType(String formatType)
and then
public void createTable(Mbean bean) throws SQLException {
    String sql = getCreateTable(bean);
    // execute the SQL using JDBC...
}

private String getCreateTable(Mbean bean) {
    String sqlStart = "CREATE TABLE " + getTableName(bean.getClassName()) + " (";
    return bean.getAttributes().stream()
            .map(attribute -> mapToColumn(attribute))
            .collect(Collectors.joining(", ", sqlStart, ")")); // what about a primary key?
}

private String mapToColumn(Attribute a) {
    return getColumnName(a.getName()) + " " + getType(/* formatter or formatType, it depends */);
}
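A minimal sketch of what those helpers and the JDBC part might look like, assuming an H2 database and the two format values shown in the XML (the naming scheme, type mapping, and connection URL "jdbc:h2:~/test" are assumptions, not part of the original post):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public String getTableName(String className) {
    // e.g. OperatingSystem -> OPERATINGSYSTEM; adjust to your naming convention
    return className.toUpperCase();
}

public String getColumnName(String attributeName) {
    return attributeName.toUpperCase();
}

public String getType(String formatType) {
    // assumed mapping: PERCENT -> DOUBLE, everything else -> VARCHAR
    return "PERCENT".equals(formatType) ? "DOUBLE" : "VARCHAR(255)";
}

// the "execute SQL using JDBC" part of createTable, against an H2 URL
private void executeSql(String sql) throws SQLException {
    try (Connection con = DriverManager.getConnection("jdbc:h2:~/test", "sa", "");
         Statement st = con.createStatement()) {
        st.execute(sql);
    }
}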
I have the below steps in my feature file for a scenario.
Given my_first_step
And my_second_step
| Themes | one | three |
| Service Windows | two | four |
And my_third_step
| Create Apps |
| Config |
We can get the parameters of 'my_third_step' in the Java code as a list, like this:
public void my_third_step(List listOfItems) {}
but how can I get the parameters of 'my_second_step'?
I need to get the rows as an array of elements in the Java code. How can I do that?
You have to pass a list of objects; your object will look like this:
public class MyObject {
private Integer themes;
private Integer service;
public Integer getThemes() {
return this.themes;
}
public void setThemes(Integer themes) {
this.themes = themes;
}
public Integer getService() {
return this.service;
}
public void setService(Integer service) {
this.service = service;
}
}
Then you can pass a List<MyObject> to the method.
public void my_second_step(List<MyObject> myObjects) {
    ...
}
In the feature file change the definition as follows:
And my_second_step
| Themes | Service |
| one | two |
| three | four |
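For completeness, a sketch of the step definition that would receive that table (the annotation package depends on your Cucumber version, and the Integer fields in MyObject assume the cells hold numeric values):

import cucumber.api.java.en.And;
import java.util.List;

public class MySteps {
    @And("^my_second_step$")
    public void my_second_step(List<MyObject> items) {
        // Cucumber maps each row to a MyObject using the Themes and Service headers
        for (MyObject item : items) {
            System.out.println(item.getThemes() + " : " + item.getService());
        }
    }
}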
I hope this helps.
Using headers, we can implement a Data Table in a much cleaner and more precise way. Consider a Data Table that looks like the one below:
And my_second_step
| Heading_1 | Heading_2 | Heading_3 |
| Themes | one | three |
| Service Windows | two | four |
public void my_second_step(DataTable table) throws Throwable {
List<Map<String, String>> list = table.asMaps(String.class,String.class);
System.out.println(list.get(0).get("Heading_1") + " : " + list.get(1).get("Heading_1"));
System.out.println(list.get(0).get("Heading_2") + " : " + list.get(1).get("Heading_2"));
System.out.println(list.get(0).get("Heading_3") + " : " + list.get(1).get("Heading_3"));
}
I have a Product class with a @OneToMany association for a list of buyers. What I want is that, when I search for a product, the buyer lookup performed through the association applies a null constraint on the final date column of the Buyer table. How can I do this with a list mapping like the one below?
// it would be something like: cri.createCriteria("listaBuyer", "buyer").add(Restrictions.isNull("finalDate"));
Example
Registered data
product code | initial date | final date
-------------|--------------|------------
1            | 2016-28-07   | 2017-28-07
2            | 2016-10-08   | 2017-28-07
3            | 2017-28-08   |
4            | 2017-30-08   |
Product Class
public class Product {

    private List<Buyer> listaBuyer;

    @OneToMany(targetEntity = Buyer.class, orphanRemoval = true,
               cascade = {CascadeType.PERSIST, CascadeType.MERGE}, mappedBy = "product")
    @LazyCollection(LazyCollectionOption.FALSE)
    public List<Buyer> getListaBuyer() {
        if (listaBuyer == null) {
            listaBuyer = new ArrayList<Buyer>();
        }
        return listaBuyer;
    }
}
Built criteria:
Criteria cri = getSession().createCriteria(Product.class);
cri.createCriteria("status", "sta");
cri.add(Restrictions.eq("id", product.getId()));
return cri.list();
Expected outcome
product code | initial date | final date
-------------|--------------|------------
3            | 2017-28-08   |
4            | 2017-30-08   |
Returned result
product code | initial date | final date
-------------|--------------|------------
1            | 2016-28-07   | 2017-28-07
2            | 2016-10-08   | 2017-28-07
3            | 2017-28-08   |
4            | 2017-30-08   |
I am running a map-only job in Hadoop. The data set is a set of HTML pages in a single file (returned by a crawler).
The mapper code is written in Java, and I am using jsoup to parse. What I want as my output is a key that contains both the contents of the title tag and the content of a meta tag. Ideally I should get 1592 map output records; I am getting 3184.
The concatenation I attempt to do with this line of code is not happening.
String MN_Job = (jobT + "\t" + jobsDetail);
What I get instead is each of these separately, hence double the number of outputs. What am I doing wrong here?
public class JobsDataMapper extends Mapper<LongWritable, Text, Text, Text> {
private Text keytext = new Text();
private Text valuetext = new Text();
@Override
public void map(LongWritable key, Text value, Context context)
throws IOException, InterruptedException {
String line = value.toString();
Document doc = Jsoup.parse(line);
Elements desc = doc.select("head title, meta[name=twitter:description]");
for (Element jobhtml : desc) {
Elements title = jobhtml.select("title");
String jobT = "";
for (Element titlehtml : title) {
jobT = titlehtml.text();
}
Elements meta = jobhtml.select("meta[name=twitter:description]");
String jobsDetail ="";
for (Element metahtml : meta) {
String content = metahtml.attr("content");
String content1 = content.replaceAll("\\p{Punct}+", " ");
jobsDetail = content1.replaceAll(" (?i)a | (?i)able | (?i)about | (?i)across | (?i)after | (?i)all | (?i)almost | (?i)also | (?i)am | (?i)among | (?i)an | (?i)and | (?i)any | (?i)are | (?i)as | (?i)at | (?i)be | (?i)because | (?i)been | (?i)but | (?i)by | (?i)can | (?i)cannot | (?i)could | (?i)dear | (?i)did | (?i)do | (?i)does | (?i)either | (?i)else | (?i)ever | (?i)every | (?i)for | (?i)from | (?i)get | (?i)got | (?i)had | (?i)has | (?i)have | (?i)he | (?i)her | (?i)hers | (?i)him | (?i)his | (?i)how | (?i)however | (?i)i | (?i)if | (?i)in | (?i)into | (?i)is | (?i)it | (?i)its | (?i)just | (?i)least | (?i)let | (?i)like | (?i)likely | (?i)may | (?i)me | (?i)might | (?i)most | (?i)must | (?i)my | (?i)neither | (?i)no | (?i)nor | (?i)not | (?i)nbsp | (?i)of | (?i)off | (?i)often | (?i)on | (?i)only | (?i)or | (?i)other | (?i)our | (?i)own | (?i)rather | (?i)said | (?i)say | (?i)says | (?i)she | (?i)should | (?i)since | (?i)so | (?i)some | (?i)than | (?i)that | (?i)the | (?i)their | (?i)them | (?i)then | (?i)there | (?i)these | (?i)they | (?i)this | (?i)tis | (?i)to | (?i)too | (?i)twas | (?i)us | (?i)wants | (?i)was | (?i)we | (?i)were | (?i)what | (?i)when | (?i)where | (?i)which | (?i)while | (?i)who | (?i)whom | (?i)why | (?i)will | (?i)with | (?i)would | (?i)yet | (?i)you | (?i)your "," ");
}
String IT_Job = (jobT + "\t" + jobsDetail);
keytext.set(IT_Job) ;
valuetext.set("JobDetail");
context.write( keytext, valuetext );
}
}
}
Edit: I know what the problem is. But the thing is that the solution might not be obvious in MapReduce. You might have to write your custom RecordReader. Let me explain the problem.
In your code you read line by line. Then you apply this to the line you read:
Elements desc = doc.select("head title, meta[name=twitter:description]");
But evidently, a given line might only have a title or a <meta name=twitter:description> tag, so you read one of them and store it while the other remains blank. At any given time, only one of your variables, jobT or jobsDetail, has any data. So for the code snippet:
String IT_Job = (jobT + "\t" + jobsDetail);
one time the first one is blank, and the next time the other one is blank. So if you are expecting n records, you get 2n records. Similarly, if you attempt to extract three fields, you should get 3n records. You can test this theory by extracting another field and checking whether you get three times the expected number of records.
If the theory turns out to be correct, you might want to delimit the webpages you extract with a specific delimiter string. Then you want to write a custom RecordReader which will read one html file at a time according to the delimiter and then process the entire html file at once. That way you'll get the title and the meta tags together.
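If the crawler happens to write a known marker between pages (an assumption; the question does not say so), a lighter-weight option than a fully custom RecordReader is to tell TextInputFormat to split records on that marker instead of on newlines. A sketch of a driver doing that (the delimiter string is hypothetical):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class JobsDataDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // hypothetical page separator written by the crawler between documents
        conf.set("textinputformat.record.delimiter", "<!--END-OF-PAGE-->");
        Job job = Job.getInstance(conf, "jobs-data");
        job.setJarByClass(JobsDataDriver.class);
        job.setMapperClass(JobsDataMapper.class);
        job.setNumReduceTasks(0); // map-only job
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Each map call then receives one whole page, so the title and the meta description end up in the same record.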
Just by looking at the numbers: 3184 / 2 = 1592.
I think your file is simply duplicated in the input folder. I can't tell for sure, because you have not shown how you submit the job, but you can verify it with a simple:
bin/hadoop fs -ls /your/input_path
When submitting, either make sure that there is just the single file in there, or just reference the single file in your submission logic.
I made changes to the original code, removing the loops that were not necessary. What was happening in the older code was that when there was a title in the record it was written out, and later when there was content it was written out as well, so there were two writes per HTML file.
public class JobsDataMapper extends Mapper<LongWritable, Text, Text, Text> {
private Text keytext = new Text();
private Text valuetext = new Text();
private String jobT = new String();
private String jobName= new String();
@Override
public void map(LongWritable key, Text value, Context context)
throws IOException, InterruptedException {
String line = value.toString();
Document doc = Jsoup.parse(line);
Elements desc = doc.select("head title, meta[name=twitter:description]");
for (Element jobhtml : desc){
Elements title = jobhtml.select("title");
String jobTT = title.text();
jobT =jobTT ;
if (jobT.length()> 0){
jobName=jobTT;
}
Elements meta = jobhtml.select("meta[name=twitter:description]");
String jobsDetail ="";
String content = meta.attr("content");
String content1 = content.replaceAll("\\p{Punct}+", " ");
content1 = content1.toLowerCase();
jobsDetail = content1.replaceAll(" a| able | about | across | after | all | almost | also | am | among | an | and | any | are | as | at | be| because | been | but | by | can | cannot | could | dear | did | do | does | either | else | ever | every | for | from | get | got | had | has | have | he | her | hers | him | his | how | however | i | if | in | into | is | it | its | just | least | let | like | likely | may | me | might | most | must | my | neither | no | nor | not | nbsp | of | off | often | on | only | or | other | our | own | rather | said | say | says | she | should | since | so | some | than | that | the | their | them | then | there | these | they | this | tis | to | too | twas | us | wants | was | we | were | what | when | where | which | while | who | whom | why | will | with | would | yet | you | your "," ");
if (jobsDetail.length()>0) {
String MN_Job = (jobName+ "\t" + jobsDetail);
keytext.set(MN_Job) ;
valuetext.set("JobInIT");
context.write( keytext, valuetext );
}
}
}
}
I have a problem implementing the tree structure of OIDs. When I click the parent, I need to display only the child details, not the sub-children of a child.
That is, I should not display an OID that contains a further "." (dot) below the selected level.
For example, suppose my OID structure is private.MIB.sample.first,
private.MIB.sample.second, and so on.
When I click on MIB, it should display only "sample", not first and second.
first and second are to be displayed when I click sample.
How can I implement this in Java?
My database is MySQL. The code I tried is given below.
FilteredRowSet rs = new FilteredRowSetImpl();
// for Other Types Like OBJECT-TYPE, Object_IDENTIFIER
rs = new FilteredRowSetImpl();
rs.setCommand("Select * from MIBNODEDETAILS where " + "mn_OID like '" + OID
+ ".%' order by mn_NodeType, mn_OID");
rs.setUrl(Constants.DB_CONNECTION_URL);
rs.setFilter(new MibRowFilter(1, expString));
rs.execute();
rs.absolute(1);
rs.beforeFirst();
I guess the change is to be made in the setCommand argument.
How can I do this?
Structure of the mibnodedetails table
+--------------------+-------------------+-------------+
| mn_OID | mn_name | mn_nodetype |
+--------------------+-------------------+-------------+
| 1 | iso | 0 |
| 1.3 | org | 1 |
| 1.3.6 | dod | 1 |
| 1.3.6.1 | internet | 1 |
| 1.3.6.1.1 | directory | 1 |
| 1.3.6.1.2 | mgmt | 1 |
| 1.3.6.1.2.1 | mib-2 | 0 |
| 1.3.6.1.2.1.1 | system | 1 |
| 1.3.6.1.2.1.10 | transmission | 1 |
You can use something like
SELECT *
FROM mibnodedetails
WHERE mn_oid LIKE "+mn_OID+"%
AND LENGTH ("+mn_OID+") + 2 = LENGTH (mn_oid)
ORDER BY mn_nodetype, mn_oid
So if you pass mn_OID as 1.3.6.1 (| 1.3.6.1 | internet | 1 |),
You will get following result:
| 1.3.6.1.1 | directory | 1 |
| 1.3.6.1.2 | mgmt | 1 |
PS: This will not work for more than 9 children, since it relies on LENGTH + 2 (child suffixes with more than one digit break the length assumption).
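A possible variant that does not depend on the suffix length (assuming a direct child is exactly one dotted level below the parent) is to exclude grandchildren with a second LIKE, for example in the question's setCommand:

rs.setCommand("SELECT * FROM MIBNODEDETAILS WHERE mn_OID LIKE '" + OID + ".%' "
        + "AND mn_OID NOT LIKE '" + OID + ".%.%' "
        + "ORDER BY mn_nodetype, mn_OID");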
The function given below displays the tree as required.
public void populateMibValues()
{
final DefaultTreeModel model = (DefaultTreeModel) this.m_mibTree.getModel();
model.setRoot(null);
this.rootNode.removeAllChildren();
final String query_MibNodeDetailsSelect = "Select * from MIBNODEDETAILS where LENGTH(mn_oid)<=9 "
+ " and mn_OID<='1.3.6.1.4.1' order by mn_OID"; // only
this.innerNodeNames.clear();
this.innerNodes.clear();
this.innerNodesOid = null;
try {
final ResultSet deviceRS = Application.getDBHandler().executeQuery(query_MibNodeDetailsSelect, null);// inner
// nodes
while (deviceRS.next()) {
final mibNode mb = new mibNode(deviceRS.getString("mn_OID").toString(), deviceRS.getString("mn_name")
.toString());
mb.m_Type = Integer.parseInt(deviceRS.getString("mn_nodetype").toString());
createMibTree(mb);
}
}
catch (final Exception e) {
Application.showErrorInConsole(e);
NmsLogger.writeErrorLog("ERROR creating MIB tree failed", e.toString());
    }
}
Is there any tool that could be used to generate some code for Apache Axis2 from a (My)SQL schema? For example, the following schema:
desc name;
+-------+------------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+------------------+------+-----+---------+-------+
| id | int(11) | NO | PRI | 0 | |
| name | longtext | NO | MUL | | |
+-------+------------------+------+-----+---------+-------+
would generate something like:
interface Name
{
public long getId();
public String getName();
}
interface MyService
{
public Name getNameById(long id);
public List<Name> getNamesByName(String name);
}
with implementation, WSDL, etc.
thanks
I have a tool that I am in the process of open sourcing which may meet your needs. Please drop me a line at tshivaraj gmail.