Generating Apache AXIS2 WebServices from a (my)SQL schema? - java

Is there any tool that could be used to generate code for Apache Axis2 from a (My)SQL schema? For example, the following schema:
desc name;
+-------+------------------+------+-----+---------+-------+
| Field | Type             | Null | Key | Default | Extra |
+-------+------------------+------+-----+---------+-------+
| id    | int(11)          | NO   | PRI | 0       |       |
| name  | longtext         | NO   | MUL |         |       |
+-------+------------------+------+-----+---------+-------+
would generate... :
interface Name
{
    public long getId();
    public String getName();
}

interface MyService
{
    public Name getNameById(long id);
    public List<Name> getNamesByName(String name);
}
with implementation, wsdl etc....
thanks

I have a tool that I am in the process of open sourcing which may meet your needs. Please drop me a line at tshivaraj gmail.
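In the meantime, here is a rough, hand-written sketch of the kind of implementation such a generator would have to emit for the schema above, assuming plain JDBC underneath. The MyServiceImpl class name and the DataSource wiring are assumptions, and the WSDL would still have to be produced separately (Axis2 ships a java2wsdl tool for that).

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

import javax.sql.DataSource;

// Hypothetical hand-written version of what a generator would emit for the "name" table.
public class MyServiceImpl implements MyService {

    private final DataSource dataSource; // assumed to be configured elsewhere (e.g. a MySQL pool)

    public MyServiceImpl(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public Name getNameById(long id) {
        String sql = "SELECT id, name FROM name WHERE id = ?";
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? toName(rs) : null;
            }
        } catch (SQLException e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public List<Name> getNamesByName(String name) {
        String sql = "SELECT id, name FROM name WHERE name = ?";
        List<Name> result = new ArrayList<Name>();
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, name);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    result.add(toName(rs));
                }
            }
        } catch (SQLException e) {
            throw new RuntimeException(e);
        }
        return result;
    }

    // Maps one row of the "name" table onto the Name interface (anonymous impl for brevity).
    private Name toName(ResultSet rs) throws SQLException {
        final long id = rs.getLong("id");
        final String value = rs.getString("name");
        return new Name() {
            public long getId() { return id; }
            public String getName() { return value; }
        };
    }
}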

Springfox: Short type is considered as $int32

Language
Java / Spring Boot / Springfox 2.9.2
Description
Hi, I'm using springfox-swagger-ui and springfox-bean-validators.
How can I tell Swagger that my property is a Short ($int16)?
My Pojo
@ApiModelProperty(required = true, dataType = "java.lang.Short")
@NotNull
@JsonProperty("deviceId")
private Short deviceId;
The result in Swagger
deviceId* | integer($int32)
Expected
deviceId* | $int16
Thanks a lot
Cordially
Unfortunately you can't.
Swagger specification does not support short (int16) data type.
Supported data types:
+-------------+---------+-----------+--------------------------------------------------+
| Common Name | type    | format    | Comments                                         |
+-------------+---------+-----------+--------------------------------------------------+
| integer     | integer | int32     | signed 32 bits                                   |
| long        | integer | int64     | signed 64 bits                                   |
| float       | number  | float     |                                                  |
| double      | number  | double    |                                                  |
| string      | string  |           |                                                  |
| byte        | string  | byte      | base64 encoded characters                        |
| binary      | string  | binary    | any sequence of octets                           |
| boolean     | boolean |           |                                                  |
| date        | string  | date      | As defined by full-date - RFC3339                |
| dateTime    | string  | date-time | As defined by date-time - RFC3339                |
| password    | string  | password  | Used to hint UIs the input needs to be obscured. |
+-------------+---------+-----------+--------------------------------------------------+
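If the goal is simply to make the 16-bit constraint visible to API consumers, one possible workaround (sketched here as an assumption, not a Springfox feature for emitting int16) is to keep the int32 rendering but document the valid range explicitly:

import javax.validation.constraints.NotNull;

import com.fasterxml.jackson.annotation.JsonProperty;
import io.swagger.annotations.ApiModelProperty;

public class DevicePayload { // hypothetical DTO name

    // Still shown as integer($int32) in Swagger UI, but the range and note make the Short semantics explicit.
    @ApiModelProperty(required = true,
                      example = "42",
                      allowableValues = "range[-32768, 32767]",
                      notes = "16-bit value; OpenAPI 2.0 has no int16 format")
    @NotNull
    @JsonProperty("deviceId")
    private Short deviceId;
}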

Restriction on @OneToMany mapping

I have a Product class with an @OneToMany association to a list of buyers. When I search for a product, I want the buyers fetched through that association to be restricted by a null constraint on the final date column of the Buyer table. How can I do this with a list mapping like the one below?
// something like what I need: cri.createCriteria("listaBuyer", "buyer").add(Restrictions.isNull("finalDate"));
Example
Registered data
product code | initial date | final date |
-------------+--------------+------------+
           1 | 2016-28-07   | 2017-28-07 |
           2 | 2016-10-08   | 2017-28-07 |
           3 | 2017-28-08   |            |
           4 | 2017-30-08   |            |
Product Class
public class Product {

    private List<Buyer> listaBuyer;

    @OneToMany(targetEntity = Buyer.class, orphanRemoval = true,
               cascade = {CascadeType.PERSIST, CascadeType.MERGE}, mappedBy = "product")
    @LazyCollection(LazyCollectionOption.FALSE)
    public List<Buyer> getListaBuyer() {
        if (listaBuyer == null) {
            listaBuyer = new ArrayList<Buyer>();
        }
        return listaBuyer;
    }
}
The Criteria built so far:
Criteria cri = getSession().createCriteria(Product.class);
cri.createCriteria("status", "sta");
cri.add(Restrictions.eq("id", product.getId()));
return cri.list();
Expected outcome
product code | initial date | final date |
-------------+--------------+------------+
           3 | 2017-28-08   |            |
           4 | 2017-30-08   |            |
Returned result
product code | initial date | final date |
-------------+--------------+------------+
           1 | 2016-28-07   | 2017-28-07 |
           2 | 2016-10-08   | 2017-28-07 |
           3 | 2017-28-08   |            |
           4 | 2017-30-08   |            |
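No answer is recorded here, but one possible direction, sketched under the assumption that Hibernate 4.x's Criteria API is in use (matching the commented line above; the method name is made up), is to turn that comment into a sub-criteria on the collection:

import java.util.List;

import org.hibernate.Criteria;
import org.hibernate.criterion.Restrictions;
import org.hibernate.sql.JoinType;

@SuppressWarnings("unchecked")
public List<Product> findProductWithOpenBuyers(long productId) {
    Criteria cri = getSession().createCriteria(Product.class);
    cri.createCriteria("status", "sta");
    // Join the buyer collection and keep only buyers whose finalDate is still null.
    cri.createCriteria("listaBuyer", "buyer", JoinType.INNER_JOIN)
       .add(Restrictions.isNull("finalDate"));
    cri.add(Restrictions.eq("id", productId));
    // The collection join can duplicate Product rows, so collapse them to distinct root entities.
    cri.setResultTransformer(Criteria.DISTINCT_ROOT_ENTITY);
    return cri.list();
}

Note that this filters which products come back, not what ends up inside the eagerly loaded listaBuyer collection itself; if the collection should only ever contain buyers with no final date, Hibernate's @Where(clause = "final_date is null") on the association (the column name is an assumption) is another option to look at.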

Reading XML and Creating Multiple Tables

I have an XML input file that looks like:
<mbean className="OperatingSystem">
<attribute>
<attributeName>Arch</attributeName>
<formatter>STRING</formatter>
</attribute>
<attribute>
<attributeName>ProcessCpuLoad</attributeName>
<formatType>PERCENT</formatType>
</attribute>
</mbean>
I've created a POJO called 'Mbean' that looks like:
@XmlRootElement(name = "mbean")
@XmlAccessorType(XmlAccessType.FIELD)
public class Mbean
{
    @XmlElement(name = "attribute")
    private List<Attribute> attributes = null;

    @XmlAttribute(name = "className")
    private String className;

    public String getClassName() {
        return className;
    }
}
I can successfully unmarshal my XML file into this POJO, and my application can use the object as needed. The input file tells me which information I need to pull from a particular MBean. Is there a way to create multiple tables based on the XML file, so that when I pull that information I can store it in this table structure, using JDBC to create the SQL tables on my H2 database?
For example, I would like to create tables that look like:
+------------------------+
| MBeans |
+------+-----------------+
| ID | MBeanName |
+------+-----------------+
| 1 | OperatingSystem |
+------+-----------------+
+--------------------------------+
| Attributes |
+------+--------+----------------+
| ID | MbeanId| AttributeName |
+------+--------+----------------+
| 1 | 1 | Arch |
+------+--------+----------------+
| 2 | 1 | ProcessCpuLoad |
+------+--------+----------------+
+------------------------------------+
| OperatingSystem.Arch |
+------+--------+------------+-------+
| ID | MbeanId| AttributeId| Value |
+------+--------+------------+-------+
| 1 | 1 | 1 | amd64 |
+------+--------+------------+-------+
| 2 | 1 | 1 | amd64 |
+------+--------+------------+-------+
+------------------------------------+
| OperatingSystem.ProcessCpuLoad |
+------+--------+------------+-------+
| ID | MbeanId| AttributeId| Value |
+------+--------+------------+-------+
| 1 | 1 | 2 | 0.009 |
+------+--------+------------+-------+
| 2 | 1 | 2 | 0.0691|
+------+--------+------------+-------+
I would first write:
A method mapping className to a table name: public String getTableName(String className)
A method mapping attributeName to a column name: public String getColumnName(String attributeName)
A method mapping formatType or formatter to a database type: public String getType(String formatType)
and then
public void createTable(Mbean bean) throws SQLException {
    String sql = getCreateTable(bean);
    // execute SQL using JDBC...
}

private String getCreateTable(Mbean bean) {
    String sqlStart = "CREATE TABLE " + getTableName(bean.getClassName()) + " (";
    return bean.getAttributes().stream()
            .map(attribute -> mapToColumn(attribute))
            .collect(Collectors.joining(", ", sqlStart, ")")); // what about a primary key?
}

private String mapToColumn(Attribute a) {
    return getColumnName(a.getName()) + " " + getType(/* it depends */);
}
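To round this out, here is a minimal sketch of the first method, assuming an in-memory H2 URL (jdbc:h2:mem:test is a placeholder) and the helper methods above, with a surrogate primary key and the MbeanId column prepended to the generated columns:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.stream.Collectors;

public void createTable(Mbean bean) throws SQLException {
    // Prepend an ID primary key and the MBEANID reference, then append the generated columns.
    String sql = "CREATE TABLE " + getTableName(bean.getClassName())
            + " (ID BIGINT AUTO_INCREMENT PRIMARY KEY, MBEANID BIGINT, "
            + bean.getAttributes().stream()
                  .map(this::mapToColumn)
                  .collect(Collectors.joining(", "))
            + ")";
    // Replace the URL/credentials with those of your actual H2 database.
    try (Connection con = DriverManager.getConnection("jdbc:h2:mem:test", "sa", "");
         Statement st = con.createStatement()) {
        st.execute(sql);
    }
}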

Parsing SPARQL Result into jtable

I'm working on an Apache Jena project. I've got a Fuseki server running on my localhost.
I want to create a Java Program for my Fuseki server, that shows all the data in the triplestore in a JTable. I just have no idea how to parse the result from my query into a JTable.
My code so far:
(I left out the part where the window, table, frame etc. are created.)
private void Go() {
    String query = "SELECT ?subject ?predicate ?object \n" +
                   "WHERE { \n" +
                   "?subject ?predicate ?object }";
    Query sparqlQuery = QueryFactory.create(query, Syntax.syntaxARQ);
    QueryEngineHTTP httpQuery = new QueryEngineHTTP("http://localhost:3030/AnimalDataSet/", sparqlQuery);
    ResultSet results = httpQuery.execSelect();
    System.out.println(ResultSetFormatter.asText(results));
    while (results.hasNext()) {
        QuerySolution solution = results.next();
    }
    httpQuery.close();
}
The sysout prints this, which is the correct data:
-------------------------------------------------------------------------------------------------------------------------------------
| subject | predicate | object |
=====================================================================================================================================
| <urn:animals:data> | <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> | <http://www.w3.org/1999/02/22-rdf-syntax-ns#Seq> |
| <urn:animals:data> | <http://www.w3.org/1999/02/22-rdf-syntax-ns#_1> | <urn:animals:lion> |
| <urn:animals:data> | <http://www.w3.org/1999/02/22-rdf-syntax-ns#_2> | <urn:animals:tarantula> |
| <urn:animals:data> | <http://www.w3.org/1999/02/22-rdf-syntax-ns#_3> | <urn:animals:hippopotamus> |
| <urn:animals:lion> | <http://www.some-ficticious-zoo.com/rdf#name> | "Lion" |
| <urn:animals:lion> | <http://www.some-ficticious-zoo.com/rdf#species> | "Panthera leo" |
| <urn:animals:lion> | <http://www.some-ficticious-zoo.com/rdf#class> | "Mammal" |
| <urn:animals:tarantula> | <http://www.some-ficticious-zoo.com/rdf#name> | "Tarantula" |
| <urn:animals:tarantula> | <http://www.some-ficticious-zoo.com/rdf#species> | "Avicularia avicularia" |
| <urn:animals:tarantula> | <http://www.some-ficticious-zoo.com/rdf#class> | "Arachnid" |
| <urn:animals:hippopotamus> | <http://www.some-ficticious-zoo.com/rdf#name> | "Hippopotamus" |
| <urn:animals:hippopotamus> | <http://www.some-ficticious-zoo.com/rdf#species> | "Hippopotamus amphibius" |
| <urn:animals:hippopotamus> | <http://www.some-ficticious-zoo.com/rdf#class> | "Mammal" |
-------------------------------------------------------------------------------------------------------------------------------------
I really hope someone here knows how to parse the data from the query into a JTable :D
Thanks in advance!
I've done some further research and finally found the solution! It's quite easy actually.
You just simply change the while loop like this:
while (results.hasNext())
{
    QuerySolution sol = results.nextSolution();
    RDFNode object = sol.get("object");
    RDFNode predicate = sol.get("predicate");
    RDFNode subject = sol.get("subject");
    DefaultTableModel model = (DefaultTableModel) table.getModel();
    model.addRow(new Object[]{subject, predicate, object});
}
And that works fine for me!
For everyone who's interested, I've published my current version, with comments, to pastebin:
The link to the full (current) version of my project
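For anyone wiring this up from scratch, here is a self-contained sketch of the same idea, assuming Jena 3.x imports and the dataset URL from the question. Note that the ResultSetFormatter.asText(results) call in the original Go() consumes the ResultSet, so it has to be removed (or the query re-run) before the loop can add any rows:

import javax.swing.JFrame;
import javax.swing.JScrollPane;
import javax.swing.JTable;
import javax.swing.table.DefaultTableModel;

import org.apache.jena.query.Query;
import org.apache.jena.query.QueryFactory;
import org.apache.jena.query.QuerySolution;
import org.apache.jena.query.ResultSet;
import org.apache.jena.query.Syntax;
import org.apache.jena.sparql.engine.http.QueryEngineHTTP;

public class TripleViewer {
    public static void main(String[] args) {
        String queryString = "SELECT ?subject ?predicate ?object WHERE { ?subject ?predicate ?object }";
        Query sparqlQuery = QueryFactory.create(queryString, Syntax.syntaxARQ);
        QueryEngineHTTP httpQuery =
                new QueryEngineHTTP("http://localhost:3030/AnimalDataSet/", sparqlQuery);

        // Column names match the SELECT variables; each row holds the three RDF nodes.
        DefaultTableModel model = new DefaultTableModel(new Object[]{"subject", "predicate", "object"}, 0);
        try {
            ResultSet results = httpQuery.execSelect();
            while (results.hasNext()) {
                QuerySolution sol = results.nextSolution();
                model.addRow(new Object[]{sol.get("subject"), sol.get("predicate"), sol.get("object")});
            }
        } finally {
            httpQuery.close();
        }

        JFrame frame = new JFrame("Triples");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.add(new JScrollPane(new JTable(model)));
        frame.pack();
        frame.setVisible(true);
    }
}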

Java code in Hadoop

I am running a map-only job in Hadoop. The dataset is a set of HTML pages in a single file (returned by a crawler).
The mapper code is written in Java. I am using jsoup to parse. What I want as my output is a key that has both the contents of the title tag and the content of a meta tag. Ideally I should get 1592 map output records; I am getting 3184.
The concatenation I attempt to do with this line of code is not happening.
String MN_Job = (jobT + "\t" + jobsDetail);
What I get instead is each of these separately, hence double the number of outputs. What am I doing wrong here?
public class JobsDataMapper extends Mapper<LongWritable, Text, Text, Text> {
    private Text keytext = new Text();
    private Text valuetext = new Text();

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String line = value.toString();
        Document doc = Jsoup.parse(line);
        Elements desc = doc.select("head title, meta[name=twitter:description]");
        for (Element jobhtml : desc) {
            Elements title = jobhtml.select("title");
            String jobT = "";
            for (Element titlehtml : title) {
                jobT = titlehtml.text();
            }
            Elements meta = jobhtml.select("meta[name=twitter:description]");
            String jobsDetail = "";
            for (Element metahtml : meta) {
                String content = metahtml.attr("content");
                String content1 = content.replaceAll("\\p{Punct}+", " ");
jobsDetail = content1.replaceAll(" (?i)a | (?i)able | (?i)about | (?i)across | (?i)after | (?i)all | (?i)almost | (?i)also | (?i)am | (?i)among | (?i)an | (?i)and | (?i)any | (?i)are | (?i)as | (?i)at | (?i)be | (?i)because | (?i)been | (?i)but | (?i)by | (?i)can | (?i)cannot | (?i)could | (?i)dear | (?i)did | (?i)do | (?i)does | (?i)either | (?i)else | (?i)ever | (?i)every | (?i)for | (?i)from | (?i)get | (?i)got | (?i)had | (?i)has | (?i)have | (?i)he | (?i)her | (?i)hers | (?i)him | (?i)his | (?i)how | (?i)however | (?i)i | (?i)if | (?i)in | (?i)into | (?i)is | (?i)it | (?i)its | (?i)just | (?i)least | (?i)let | (?i)like | (?i)likely | (?i)may | (?i)me | (?i)might | (?i)most | (?i)must | (?i)my | (?i)neither | (?i)no | (?i)nor | (?i)not | (?i)nbsp | (?i)of | (?i)off | (?i)often | (?i)on | (?i)only | (?i)or | (?i)other | (?i)our | (?i)own | (?i)rather | (?i)said | (?i)say | (?i)says | (?i)she | (?i)should | (?i)since | (?i)so | (?i)some | (?i)than | (?i)that | (?i)the | (?i)their | (?i)them | (?i)then | (?i)there | (?i)these | (?i)they | (?i)this | (?i)tis | (?i)to | (?i)too | (?i)twas | (?i)us | (?i)wants | (?i)was | (?i)we | (?i)were | (?i)what | (?i)when | (?i)where | (?i)which | (?i)while | (?i)who | (?i)whom | (?i)why | (?i)will | (?i)with | (?i)would | (?i)yet | (?i)you | (?i)your "," ");
            }
            String IT_Job = (jobT + "\t" + jobsDetail);
            keytext.set(IT_Job);
            valuetext.set("JobDetail");
            context.write(keytext, valuetext);
        }
    }
}
Edit: I know what the problem is. But the thing is that the solution might not be obvious in MapReduce. You might have to write your custom RecordReader. Let me explain the problem.
In your code you read line by line. Then you apply this to the line you read:
Elements desc = doc.select("head title, meta[name=twitter:description]");
But a single line might contain only a title or only a <meta name=twitter:description> tag. So you read one of those and store it; the other one remains blank. At any given time, only one of your variables, jobT or jobsDetail, has any data. So for the code snippet:
String IT_Job = (jobT + "\t" + jobsDetail);
one time the first one is blank, and the next time the other one is blank. So if you are expecting n records, you get 2n records. Similarly, if you attempt to extract three fields, you should get 3n records. You can test this theory by extracting another field and then checking whether you get three times the expected number of records.
If the theory turns out to be correct, you might want to delimit the webpages you extract with a specific delimiter string. Then you want to write a custom RecordReader which will read one html file at a time according to the delimiter and then process the entire html file at once. That way you'll get the title and the meta tags together.
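If the crawler output can be separated by a known marker between pages, then, assuming Hadoop 2.x, the textinputformat.record.delimiter setting is a lighter-weight alternative to a full custom RecordReader: each delimited chunk reaches the mapper as one record. A minimal driver sketch (the <!--PAGE_BREAK--> marker and the class names are assumptions):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class JobsDataDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Treat everything up to the (hypothetical) page separator as one record,
        // so the mapper sees a whole HTML page instead of a single line.
        conf.set("textinputformat.record.delimiter", "<!--PAGE_BREAK-->");

        Job job = Job.getInstance(conf, "jobs-data");
        job.setJarByClass(JobsDataDriver.class);
        job.setMapperClass(JobsDataMapper.class);
        job.setNumReduceTasks(0);                 // map-only job
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}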
Just by the look at the numbers: 3184/2 = 1592.
I think that your file is just duplicated in the input folder. I can't tell for sure, because you have not shown the code that submits the job, but maybe you can verify it with a simple:
bin/hadoop fs -ls /your/input_path
When submitting, either make sure that there is just the single file in there, or just reference the single file in your submission logic.
I made changes to the original code, removing the loops that were not necessary. What was happening in the older code was that when there was a title in the record, it was output, and later when there was content, it was output as well. So there were two writes per HTML file.
public class JobsDataMapper extends Mapper<LongWritable, Text, Text, Text> {
    private Text keytext = new Text();
    private Text valuetext = new Text();
    private String jobT = new String();
    private String jobName = new String();

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String line = value.toString();
        Document doc = Jsoup.parse(line);
        Elements desc = doc.select("head title, meta[name=twitter:description]");
        for (Element jobhtml : desc) {
            Elements title = jobhtml.select("title");
            String jobTT = title.text();
            jobT = jobTT;
            if (jobT.length() > 0) {
                jobName = jobTT;
            }
            Elements meta = jobhtml.select("meta[name=twitter:description]");
            String jobsDetail = "";
            String content = meta.attr("content");
            // lower-case first so the case-sensitive stop-word list below actually matches
            String content1 = content.replaceAll("\\p{Punct}+", " ").toLowerCase();
jobsDetail = content1.replaceAll(" a| able | about | across | after | all | almost | also | am | among | an | and | any | are | as | at | be| because | been | but | by | can | cannot | could | dear | did | do | does | either | else | ever | every | for | from | get | got | had | has | have | he | her | hers | him | his | how | however | i | if | in | into | is | it | its | just | least | let | like | likely | may | me | might | most | must | my | neither | no | nor | not | nbsp | of | off | often | on | only | or | other | our | own | rather | said | say | says | she | should | since | so | some | than | that | the | their | them | then | there | these | they | this | tis | to | too | twas | us | wants | was | we | were | what | when | where | which | while | who | whom | why | will | with | would | yet | you | your "," ");
            if (jobsDetail.length() > 0) {
                String MN_Job = (jobName + "\t" + jobsDetail);
                keytext.set(MN_Job);
                valuetext.set("JobInIT");
                context.write(keytext, valuetext);
            }
        }
    }
}
