How can I get the country dialing code of my SIM in Android? I have used
TelephonyManager tm = (TelephonyManager) getSystemService(Context.TELEPHONY_SERVICE);
String countryCode = tm.getNetworkCountryIso();
But here I get the ISO country code, like "BD" for Bangladesh, while I need the dialing code, +880.
Locale.getDefault().getCountry(); behaves the same way. I need codes like +91 for India and +880 for Bangladesh.
There is no direct method in the TelephonyManager class that returns the dialing code of a country. You have to build a key-value list mapping every country's ISO code to its dialing code.
<string-array name="DialingCountryCode" >
<item>32,BE</item>
<item>501,BZ</item>
<item>229,BJ</item>
<item>975,BT</item>
<item>591,BO</item>
<item>387,BA</item>
<item>267,BW</item>
<item>55,BR</item>
<item>673,BN</item>
<item>359,BG</item>
<item>226,BF</item>
<item>95,MM</item>
<item>257,BI</item>
<item>855,KH</item>
<item>237,CM</item>
<item>1,CA</item>
<item>238,CV</item>
<item>236,CF</item>
<item>235,TD</item>
<item>56,CL</item>
<item>86,CN</item>
<item>61,CX</item>
<item>61,CC</item>
<item>57,CO</item>
<item>269,KM</item>
<item>242,CG</item>
<item>243,CD</item>
<item>682,CK</item>
<item>506,CR</item>
<item>385,HR</item>
<item>53,CU</item>
<item>357,CY</item>
<item>93,AF</item>
<item>355,AL</item>
<item>213,DZ</item>
<item>376,AD</item>
<item>244,AO</item>
<item>672,AQ</item>
<item>54,AR</item>
<item>374,AM</item>
<item>297,AW</item>
<item>61,AU</item>
<item>43,AT</item>
<item>994,AZ</item>
<item>973,BH</item>
<item>880,BD</item>
<item>375,BY</item>
<item>420,CZ</item>
<item>45,DK</item>
<item>253,DJ</item>
<item>670,TL</item>
<item>593,EC</item>
<item>20,EG</item>
<item>503,SV</item>
<item>240,GQ</item>
<item>358,FI</item>
<item>33,FR</item>
<item>291,ER</item>
<item>372,EE</item>
<item>251,ET</item>
<item>500,FK</item>
<item>298,FO</item>
<item>679,FJ</item>
<item>689,PF</item>
<item>241,GA</item>
<item>220,GM</item>
<item>995,GE</item>
<item>49,DE</item>
<item>233,GH</item>
<item>350,GI</item>
<item>30,GR</item>
<item>299,GL</item>
<item>502,GT</item>
<item>224,GN</item>
<item>245,GW</item>
<item>592,GY</item>
<item>509,HT</item>
<item>504,HN</item>
<item>852,HK</item>
<item>36,HU</item>
<item>91,IN</item>
<item>62,ID</item>
<item>98,IR</item>
<item>964,IQ</item>
<item>353,IE</item>
<item>44,IM</item>
<item>972,IL</item>
<item>39,IT</item>
<item>225,CI</item>
<item>81,JP</item>
<item>962,JO</item>
<item>7,KZ</item>
<item>254,KE</item>
<item>686,KI</item>
<item>965,KW</item>
<item>996,KG</item>
<item>856,LA</item>
<item>371,LV</item>
<item>961,LB</item>
<item>266,LS</item>
<item>231,LR</item>
<item>218,LY</item>
<item>423,LI</item>
<item>370,LT</item>
<item>352,LU</item>
<item>853,MO</item>
<item>389,MK</item>
<item>261,MG</item>
<item>265,MW</item>
<item>60,MY</item>
<item>960,MV</item>
<item>223,ML</item>
<item>356,MT</item>
<item>692,MH</item>
<item>222,MR</item>
<item>230,MU</item>
<item>262,YT</item>
<item>52,MX</item>
<item>691,FM</item>
<item>373,MD</item>
<item>377,MC</item>
<item>976,MN</item>
<item>382,ME</item>
<item>212,MA</item>
<item>258,MZ</item>
<item>264,NA</item>
<item>674,NR</item>
<item>977,NP</item>
<item>31,NL</item>
<item>599,AN</item>
<item>687,NC</item>
<item>64,NZ</item>
<item>505,NI</item>
<item>227,NE</item>
<item>234,NG</item>
<item>683,NU</item>
<item>850,KP</item>
<item>47,NO</item>
<item>968,OM</item>
<item>92,PK</item>
<item>680,PW</item>
<item>507,PA</item>
<item>675,PG</item>
<item>595,PY</item>
<item>51,PE</item>
<item>63,PH</item>
<item>870,PN</item>
<item>48,PL</item>
<item>351,PT</item>
<item>1,PR</item>
<item>974,QA</item>
<item>40,RO</item>
<item>7,RU</item>
<item>250,RW</item>
<item>590,BL</item>
<item>685,WS</item>
<item>378,SM</item>
<item>239,ST</item>
<item>966,SA</item>
<item>221,SN</item>
<item>381,RS</item>
<item>248,SC</item>
<item>232,SL</item>
<item>65,SG</item>
<item>421,SK</item>
<item>386,SI</item>
<item>677,SB</item>
<item>252,SO</item>
<item>27,ZA</item>
<item>82,KR</item>
<item>34,ES</item>
<item>94,LK</item>
<item>290,SH</item>
<item>508,PM</item>
<item>249,SD</item>
<item>597,SR</item>
<item>268,SZ</item>
<item>46,SE</item>
<item>41,CH</item>
<item>963,SY</item>
<item>886,TW</item>
<item>992,TJ</item>
<item>255,TZ</item>
<item>66,TH</item>
<item>228,TG</item>
<item>690,TK</item>
<item>676,TO</item>
<item>216,TN</item>
<item>90,TR</item>
<item>993,TM</item>
<item>688,TV</item>
<item>971,AE</item>
<item>256,UG</item>
<item>44,GB</item>
<item>380,UA</item>
<item>598,UY</item>
<item>1,US</item>
<item>998,UZ</item>
<item>678,VU</item>
<item>39,VA</item>
<item>58,VE</item>
<item>84,VN</item>
<item>681,WF</item>
<item>967,YE</item>
<item>260,ZM</item>
<item>263,ZW</item>
</string-array>
public static String getCountryDialCode(Context context) {
    String countryDialCode = null;
    TelephonyManager telephonyMngr = (TelephonyManager) context.getSystemService(Context.TELEPHONY_SERVICE);
    String countryId = telephonyMngr.getSimCountryIso().toUpperCase();
    String[] arrCountryCode = context.getResources().getStringArray(R.array.DialingCountryCode);
    for (int i = 0; i < arrCountryCode.length; i++) {
        String[] arrDial = arrCountryCode[i].split(",");
        if (arrDial[1].trim().equals(countryId.trim())) {
            countryDialCode = arrDial[0];
            break;
        }
    }
    return countryDialCode;
}
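The lookup itself is plain Java, so it can be sketched and tested without any Android classes. A minimal sketch, assuming the same "dialCode,ISO" entry format as the string-array above; the `DialCodeLookup` class name, the excerpt of entries, and the leading "+" on the result are my additions:

```java
import java.util.HashMap;
import java.util.Map;

public class DialCodeLookup {
    // Excerpt of the "dialCode,ISO" pairs from the string-array above.
    private static final String[] ENTRIES = {"880,BD", "91,IN", "1,US", "44,GB"};
    private static final Map<String, String> DIAL_CODES = new HashMap<>();
    static {
        for (String entry : ENTRIES) {
            String[] parts = entry.split(",");
            DIAL_CODES.put(parts[1].trim(), parts[0].trim());
        }
    }

    // Returns the dialing code with a leading "+", or null if the ISO code is unknown.
    public static String getDialCode(String countryIso) {
        String code = DIAL_CODES.get(countryIso.toUpperCase());
        return code == null ? null : "+" + code;
    }

    public static void main(String[] args) {
        System.out.println(getDialCode("bd")); // +880
        System.out.println(getDialCode("in")); // +91
    }
}
```

On Android you would feed it the value of telephonyMngr.getSimCountryIso() instead of a literal.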
Actually libphonenumber is more convenient and is actively maintained:
PhoneNumberUtil.createInstance(getContext()).getCountryCodeForRegion(countryNameCode)
For Android instead of using the Google library this one seems to be more lightweight: https://github.com/MichaelRocks/libphonenumber-android
The best way to get the country code is by using this library.
Get the complete documentation from GitHub.
This library can detect the country code automatically just by adding the attribute app:ccp_autoDetectCountry="true".
Code Example :
<com.hbb20.CountryCodePicker
android:id="@+id/ccp"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:padding="8dp"
app:ccp_showFullName="false"
app:ccp_showNameCode="false"
app:ccp_showPhoneCode="true"
app:ccp_autoDetectCountry="true"/>
Read the complete implementation details on GitHub:
https://github.com/hbb20/CountryCodePickerProject
I am trying to read a CLOB, which is basically XML, from an Oracle DB and populate it in an AngularJS UI Grid.
I am doing the same with plain JSON and it works perfectly fine.
JSON response from the backend:
{"events":{"ORDER_NO":"BBY01-100000709660","ORDER_HEADER_KEY":"2020040811522311790606 ","CREATETS":"2020-04-08 11:52:47","TMPLT_NM":"EOMS_0194 ","EMAIL_XML":"<email CommunicationType=\"Email\" SourceSystem=\"OMS\" TemplatePageZone=\"\" brand=\"BESTBUY\" channel=\"BESTBUY\" emailAddr=\"test.tester#bestbuy.com\" template=\"EOMS_0178_TEST\">"" <name firstName=\"Test\" lastName=\"\" middleInitial=\"\"/>"" <order ATGID=\"ATG28268080246\" IsSuppressRequired=\"Y\" LoggedInFlag=\"Y\" LoyaltyID=\"0160140134\" OrderName=\"MSFTAllAccess\" PartyID=\"123456\" PriorityNumber=\"160140134\" customerPhoneNo=\"6515554321\" hasActivatedDevice=\"N\" orderDate=\"01/28/2020\" orderHeaderKey=\"2020012813423582265743\" orderIdATG=\"BBY01-1MT2010012802\" orderStatusLinkDisplayFlag=\"Y\" orderTotal=\"0.00\" orderTotalMinusCoupons=\"0.00\" partnerID=\"\" partnerOrderNo=\"MAV513281qweq1\" salesSource=\"BBYC\" shippingTotal=\"0.00\" taxTotal=\"0.00\">"" <creditCard cardType=\"\" number=\"\"/>"" <digitalCoupons digitalCouponTotal=\"0.00\"/>"" <lineItems>"" <lineItem CustPromiseDate=\"02/26/2020\" CustPromiseType=\"InHandDate\" availabilityMsg=\"\" beginEstArrivalDate=\"02/24/2020\" conditionVariableOne=\"\" conditionVariableTwo=\"\" description=\"Microsoft Surface Pro 3 12 Intel Core i7 256GB Silver\" endEstArrivalDate=\"02/26/2020\" expectedShipDays=\"\" format=\"\" giftPackaging=\"N\" inHandDate=\"02/26/2020\" itemID=\"\" itemShortDesc=\"Microsoft Surface Pro 3 12 Intel Core i7 256GB Silver\" lineItemProductTotal=\"0.00\" lineItemShippingCost=\"0.00\" merchClass=\"\" modelNo=\"1000186097\" orderLineKey=\"2020021911334791500160\" oversizeFlag=\"\" pickupDate=\"\" preOrder=\"\" primeLine=\"1\" productLine=\"6.403.635\" quantity=\"1\" releaseDate=\"\" reshipReasonCode=\"RESHIP_DAMAGED_ITEM\" shipDate=\"\" shippingMethod=\"\" signatureRequiredFlag=\"N\" sku=\"9248206\" status=\"\" subLine=\"1\" tax=\"0.00\" total=\"0.00\" unitPrice=\"0.00\" unitShippingCost=\"0.00\">"" <shippingAddr 
city=\"RICHFIELD\" line1=\"1000 W 78TH ST\" line2=\"\" state=\"MN\" zip=\"55423\">"" <name firstName=\"Test\" lastName=\"Tester\" middleInitial=\"\"/>"" </shippingAddr>"" <allowance allowanceAmt=\"0.00\" reason=\"\"/>"" <return date=\"\" lineQty=\"\" lineTotal=\"0.00\" productCredit=\"0.00\" reason=\"\" restockingFee=\"0.00\" shippingCredit=\"0.00\" taxCredit=\"0.00\"/>"" <cancel backOrderExtendedXNumDays=\"\" reason=\"\"/>"" <ros actualDeliveryDate=\"\" pickupDate=\"\"/>"" <store storeName=\"\" storeNum=\"\"/>"" <psp plan=\"\"/>"" <carriers>"" <carrier los=\"\" name=\"\" quantity=\"\" trackingNum=\"\"/>"" </carriers>"" </lineItem>"" </lineItems>"" <makeGood makeGoodFlag=\"N\"/>"" </order>"" <account atgProfileId=\"\" cirisID=\"\" info=\"\" password=\"\"/>"" <comments/>""</email>"}}
Whenever I try to read the values, it throws this exception:
Unexpected string in JSON at position 372
at JSON.parse (<anonymous>)
Below is the AJAX response code:
$http.get(url).then(function(response) {
if(response.data.events == null || response.data.events == undefined ||
response.data.events == "undefined"){
$("#loader1").hide();
$scope.close = true;
$scope.responseMessage = "";
$scope.gridOptions1.data.length=0;
$scope.errorMessage = "Order not found!!!!";
}else{
console.log("1");
$("#loader1").hide();
var responseNew = JSON.stringify(response.data.events);
$scope.gridOptions1.data = responseNew;
$scope.mySelectedRows = $scope.gridApi.selection.getSelectedRows();
$scope.close = true;
$scope.errorMessage = "";
$scope.responseMessage = "Order details fetched successfully";
}
}, function(response) {
$("#loader1").hide();
$scope.close = true;
$scope.responseMessage = "";
$scope.gridOptions.data.length=0;
$scope.gridOptions1.data.length=0;
});
There's one extra double quote here:
Parse error on line 1:
...\"EOMS_0178_TEST\">"" <name firstName...
-----------------------^
Expecting 'EOF', '}', ':', ',', ']', got 'STRING'
Use JSON.parse instead of JSON.stringify. The response you're getting from the back end (the one you mentioned above) is already stringified JSON; you have to parse it to read the values.
The above issue happened while storing the XML in the DB: since the new elements had spaces in between, the value was treated as a string and extra double quotes got appended in the JSON.
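To illustrate the root cause: any raw XML placed inside a JSON string value must have its backslashes and double quotes escaped first, otherwise the client-side JSON.parse fails exactly as shown above. A minimal stdlib Java sketch; the `escapeForJson` helper is illustrative, not from any library:

```java
public class JsonEscapeDemo {
    // Escapes backslashes and double quotes so raw XML can be embedded
    // in a JSON string value without breaking JSON.parse on the client.
    static String escapeForJson(String raw) {
        return raw.replace("\\", "\\\\").replace("\"", "\\\"");
    }

    public static void main(String[] args) {
        String xml = "<email template=\"EOMS_0178_TEST\"> <name firstName=\"Test\"/> </email>";
        String json = "{\"EMAIL_XML\":\"" + escapeForJson(xml) + "\"}";
        System.out.println(json);
    }
}
```

A proper JSON library (Jackson, Gson, org.json) does this escaping for you and is the safer fix than concatenating strings by hand.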
I need to train the chunker in OpenNLP to classify the training data as noun phrases. How do I proceed? The online documentation does not explain how to do it from within a program rather than the command line. It says to use en-chunker.train, but how do you make that file?
EDIT: @Alaye
After running the code you gave in your answer, I get the following error, which I cannot fix:
Indexing events using cutoff of 5
Computing event counts... done. 3 events
Dropped event B-NP:[w_2=bos, w_1=bos, w0=He, w1=reckons, w2=., w_1=bosw0=He, w0=Hew1=reckons, t_2=bos, t_1=bos, t0=PRP, t1=VBZ, t2=., t_2=bost_1=bos, t_1=bost0=PRP, t0=PRPt1=VBZ, t1=VBZt2=., t_2=bost_1=bost0=PRP, t_1=bost0=PRPt1=VBZ, t0=PRPt1=VBZt2=., p_2=bos, p_1=bos, p_2=bosp_1=bos, p_1=bost_2=bos, p_1=bost_1=bos, p_1=bost0=PRP, p_1=bost1=VBZ, p_1=bost2=., p_1=bost_2=bost_1=bos, p_1=bost_1=bost0=PRP, p_1=bost0=PRPt1=VBZ, p_1=bost1=VBZt2=., p_1=bost_2=bost_1=bost0=PRP, p_1=bost_1=bost0=PRPt1=VBZ, p_1=bost0=PRPt1=VBZt2=., p_1=bosw_2=bos, p_1=bosw_1=bos, p_1=bosw0=He, p_1=bosw1=reckons, p_1=bosw2=., p_1=bosw_1=bosw0=He, p_1=bosw0=Hew1=reckons]
Dropped event B-VP:[w_2=bos, w_1=He, w0=reckons, w1=., w2=eos, w_1=Hew0=reckons, w0=reckonsw1=., t_2=bos, t_1=PRP, t0=VBZ, t1=., t2=eos, t_2=bost_1=PRP, t_1=PRPt0=VBZ, t0=VBZt1=., t1=.t2=eos, t_2=bost_1=PRPt0=VBZ, t_1=PRPt0=VBZt1=., t0=VBZt1=.t2=eos, p_2=bos, p_1=B-NP, p_2=bosp_1=B-NP, p_1=B-NPt_2=bos, p_1=B-NPt_1=PRP, p_1=B-NPt0=VBZ, p_1=B-NPt1=., p_1=B-NPt2=eos, p_1=B-NPt_2=bost_1=PRP, p_1=B-NPt_1=PRPt0=VBZ, p_1=B-NPt0=VBZt1=., p_1=B-NPt1=.t2=eos, p_1=B-NPt_2=bost_1=PRPt0=VBZ, p_1=B-NPt_1=PRPt0=VBZt1=., p_1=B-NPt0=VBZt1=.t2=eos, p_1=B-NPw_2=bos, p_1=B-NPw_1=He, p_1=B-NPw0=reckons, p_1=B-NPw1=., p_1=B-NPw2=eos, p_1=B-NPw_1=Hew0=reckons, p_1=B-NPw0=reckonsw1=.]
Dropped event O:[w_2=He, w_1=reckons, w0=., w1=eos, w2=eos, w_1=reckonsw0=., w0=.w1=eos, t_2=PRP, t_1=VBZ, t0=., t1=eos, t2=eos, t_2=PRPt_1=VBZ, t_1=VBZt0=., t0=.t1=eos, t1=eost2=eos, t_2=PRPt_1=VBZt0=., t_1=VBZt0=.t1=eos, t0=.t1=eost2=eos, p_2B-NP, p_1=B-VP, p_2B-NPp_1=B-VP, p_1=B-VPt_2=PRP, p_1=B-VPt_1=VBZ, p_1=B-VPt0=., p_1=B-VPt1=eos, p_1=B-VPt2=eos, p_1=B-VPt_2=PRPt_1=VBZ, p_1=B-VPt_1=VBZt0=., p_1=B-VPt0=.t1=eos, p_1=B-VPt1=eost2=eos, p_1=B-VPt_2=PRPt_1=VBZt0=., p_1=B-VPt_1=VBZt0=.t1=eos, p_1=B-VPt0=.t1=eost2=eos, p_1=B-VPw_2=He, p_1=B-VPw_1=reckons, p_1=B-VPw0=., p_1=B-VPw1=eos, p_1=B-VPw2=eos, p_1=B-VPw_1=reckonsw0=., p_1=B-VPw0=.w1=eos]
Indexing... done.
Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:653)
at java.util.ArrayList.get(ArrayList.java:429)
at opennlp.tools.ml.model.AbstractDataIndexer.sortAndMerge(AbstractDataIndexer.java:89)
at opennlp.tools.ml.model.TwoPassDataIndexer.<init>(TwoPassDataIndexer.java:105)
at opennlp.tools.ml.AbstractEventTrainer.getDataIndexer(AbstractEventTrainer.java:74)
at opennlp.tools.ml.AbstractEventTrainer.train(AbstractEventTrainer.java:91)
at opennlp.tools.ml.model.TrainUtil.train(TrainUtil.java:53)
at opennlp.tools.chunker.ChunkerME.train(ChunkerME.java:253)
at com.oracle.crm.nlp.CustomChunker2.main(CustomChunker2.java:91)
Sorting and merging events... Process exited with exit code 1.
(My en-chunker.train had only the first two lines and the last line of your sample data set.)
Could you please tell me why this is happening and how to fix it?
EDIT2: I got the chunker to work; however, it gives an error when I change the sentence in the training set to any sentence other than the one in your answer. Can you tell me why that could be happening?
As described in the OpenNLP documentation, here is a sample sentence of the training data:
He PRP B-NP
reckons VBZ B-VP
the DT B-NP
current JJ I-NP
account NN I-NP
deficit NN I-NP
will MD B-VP
narrow VB I-VP
to TO B-PP
only RB B-NP
# # I-NP
1.8 CD I-NP
billion CD I-NP
in IN B-PP
September NNP B-NP
. . O
This is how you make your en-chunker.train file, and you can create the corresponding .bin file using the CLI:
$ opennlp ChunkerTrainerME -model en-chunker.bin -lang en -data en-chunker.train -encoding UTF-8
or using API
public class SentenceTrainer {
public static void trainModel(String inputFile, String modelFile)
throws IOException {
Objects.requireNonNull(inputFile);
Objects.requireNonNull(modelFile);
MarkableFileInputStreamFactory factory = new MarkableFileInputStreamFactory(
new File(inputFile));
Charset charset = Charset.forName("UTF-8");
// Read the training data through the factory instead of a hardcoded file name
ObjectStream<String> lineStream =
new PlainTextByLineStream(factory, charset);
ObjectStream<ChunkSample> sampleStream = new ChunkSampleStream(lineStream);
ChunkerModel model;
try {
model = ChunkerME.train("en", sampleStream,
new DefaultChunkerContextGenerator(), TrainingParameters.defaultParams());
}
finally {
sampleStream.close();
}
OutputStream modelOut = null;
try {
modelOut = new BufferedOutputStream(new FileOutputStream(modelFile));
model.serialize(modelOut);
} finally {
if (modelOut != null)
modelOut.close();
}
}
}
and the main method will be:
public class Main {
public static void main(String args[]) throws IOException {
String inputFile = "//path//to//data.train";
String modelFile = "//path//to//.bin";
SentenceTrainer.trainModel(inputFile, modelFile);
}
}
Reference: this blog.
Hope this helps!
PS: collect/write the data as above in a .txt file and rename it with a .train extension (even trainingdata.txt will work). That is how you make a .train file.
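As a sanity check of the three-column training format shown earlier (word, POS tag, chunk tag, where B- begins a chunk and I- continues it), here is a plain-Java sketch with no OpenNLP dependency that groups the B-NP/I-NP tokens into noun phrases; the class and method names are my own:

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkFormatDemo {
    // Groups consecutive B-NP/I-NP tokens from the three-column training format
    // (word, POS tag, chunk tag) into noun phrases.
    public static List<String> nounPhrases(String[] lines) {
        List<String> phrases = new ArrayList<>();
        StringBuilder current = null;
        for (String line : lines) {
            String[] cols = line.trim().split("\\s+");
            String chunk = cols.length == 3 ? cols[2] : "O";
            if (chunk.equals("B-NP")) {
                if (current != null) phrases.add(current.toString());
                current = new StringBuilder(cols[0]);
            } else if (chunk.equals("I-NP") && current != null) {
                current.append(' ').append(cols[0]);
            } else if (current != null) {
                phrases.add(current.toString());
                current = null;
            }
        }
        if (current != null) phrases.add(current.toString());
        return phrases;
    }

    public static void main(String[] args) {
        String[] sample = {"He PRP B-NP", "reckons VBZ B-VP", "the DT B-NP",
                "current JJ I-NP", "account NN I-NP", "deficit NN I-NP", ". . O"};
        System.out.println(nounPhrases(sample)); // [He, the current account deficit]
    }
}
```

If this grouping looks wrong on your own data, the .train file is probably malformed, which is a common cause of training errors like the one in the EDIT above.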
I am new to Stanford NLP. I cannot find any good and complete documentation or tutorial. My task is sentiment analysis. I have a very large dataset of product reviews, which I have already split into positive and negative according to the "stars" given by the users. Now I need to find the most frequent positive and negative adjectives to use as the features of my algorithm. I understand how to do tokenization, lemmatization and POS tagging from here. I got files like this.
The review was
Don't waste your money. This is a short DVD and the host is boring and offers information that is common sense to any idiot. Pass on this and buy something else. Very generic
and the output was.
Sentence #1 (6 tokens):
Don't waste your money.
[Text=Do CharacterOffsetBegin=0 CharacterOffsetEnd=2 PartOfSpeech=VBP Lemma=do]
[Text=n't CharacterOffsetBegin=2 CharacterOffsetEnd=5 PartOfSpeech=RB Lemma=not]
[Text=waste CharacterOffsetBegin=6 CharacterOffsetEnd=11 PartOfSpeech=VB Lemma=waste]
[Text=your CharacterOffsetBegin=12 CharacterOffsetEnd=16 PartOfSpeech=PRP$ Lemma=you]
[Text=money CharacterOffsetBegin=17 CharacterOffsetEnd=22 PartOfSpeech=NN Lemma=money]
[Text=. CharacterOffsetBegin=22 CharacterOffsetEnd=23 PartOfSpeech=. Lemma=.]
Sentence #2 (21 tokens):
This is a short DVD and the host is boring and offers information that is common sense to any idiot.
[Text=This CharacterOffsetBegin=24 CharacterOffsetEnd=28 PartOfSpeech=DT Lemma=this]
[Text=is CharacterOffsetBegin=29 CharacterOffsetEnd=31 PartOfSpeech=VBZ Lemma=be]
[Text=a CharacterOffsetBegin=32 CharacterOffsetEnd=33 PartOfSpeech=DT Lemma=a]
[Text=short CharacterOffsetBegin=34 CharacterOffsetEnd=39 PartOfSpeech=JJ Lemma=short]
[Text=DVD CharacterOffsetBegin=40 CharacterOffsetEnd=43 PartOfSpeech=NN Lemma=dvd]
[Text=and CharacterOffsetBegin=44 CharacterOffsetEnd=47 PartOfSpeech=CC Lemma=and]
[Text=the CharacterOffsetBegin=48 CharacterOffsetEnd=51 PartOfSpeech=DT Lemma=the]
[Text=host CharacterOffsetBegin=52 CharacterOffsetEnd=56 PartOfSpeech=NN Lemma=host]
[Text=is CharacterOffsetBegin=57 CharacterOffsetEnd=59 PartOfSpeech=VBZ Lemma=be]
[Text=boring CharacterOffsetBegin=60 CharacterOffsetEnd=66 PartOfSpeech=JJ Lemma=boring]
[Text=and CharacterOffsetBegin=67 CharacterOffsetEnd=70 PartOfSpeech=CC Lemma=and]
[Text=offers CharacterOffsetBegin=71 CharacterOffsetEnd=77 PartOfSpeech=VBZ Lemma=offer]
[Text=information CharacterOffsetBegin=78 CharacterOffsetEnd=89 PartOfSpeech=NN Lemma=information]
[Text=that CharacterOffsetBegin=90 CharacterOffsetEnd=94 PartOfSpeech=WDT Lemma=that]
[Text=is CharacterOffsetBegin=95 CharacterOffsetEnd=97 PartOfSpeech=VBZ Lemma=be]
[Text=common CharacterOffsetBegin=98 CharacterOffsetEnd=104 PartOfSpeech=JJ Lemma=common]
[Text=sense CharacterOffsetBegin=105 CharacterOffsetEnd=110 PartOfSpeech=NN Lemma=sense]
[Text=to CharacterOffsetBegin=111 CharacterOffsetEnd=113 PartOfSpeech=TO Lemma=to]
[Text=any CharacterOffsetBegin=114 CharacterOffsetEnd=117 PartOfSpeech=DT Lemma=any]
[Text=idiot CharacterOffsetBegin=118 CharacterOffsetEnd=123 PartOfSpeech=NN Lemma=idiot]
[Text=. CharacterOffsetBegin=123 CharacterOffsetEnd=124 PartOfSpeech=. Lemma=.]
Sentence #3 (8 tokens):
Pass on this and buy something else.
[Text=Pass CharacterOffsetBegin=125 CharacterOffsetEnd=129 PartOfSpeech=VB Lemma=pass]
[Text=on CharacterOffsetBegin=130 CharacterOffsetEnd=132 PartOfSpeech=IN Lemma=on]
[Text=this CharacterOffsetBegin=133 CharacterOffsetEnd=137 PartOfSpeech=DT Lemma=this]
[Text=and CharacterOffsetBegin=138 CharacterOffsetEnd=141 PartOfSpeech=CC Lemma=and]
[Text=buy CharacterOffsetBegin=142 CharacterOffsetEnd=145 PartOfSpeech=VB Lemma=buy]
[Text=something CharacterOffsetBegin=146 CharacterOffsetEnd=155 PartOfSpeech=NN Lemma=something]
[Text=else CharacterOffsetBegin=156 CharacterOffsetEnd=160 PartOfSpeech=RB Lemma=else]
[Text=. CharacterOffsetBegin=160 CharacterOffsetEnd=161 PartOfSpeech=. Lemma=.]
Sentence #4 (2 tokens):
Very generic
[Text=Very CharacterOffsetBegin=162 CharacterOffsetEnd=166 PartOfSpeech=RB Lemma=very]
[Text=generic CharacterOffsetBegin=167 CharacterOffsetEnd=174 PartOfSpeech=JJ Lemma=generic]
I have already processed 10000 positive and 10000 negative files like this. Now how can I easily find the most frequent positive and negative features (adjectives)? Do I need to read all the output (processed) files and make a frequency count of the adjectives like this, or is there an easier way with Stanford CoreNLP?
Here is an example of processing an annotated review and storing the adjectives in a Counter.
In the example the movie review "The movie was great! It was a great film." has a sentiment of "positive".
I would suggest altering my code to load in each file and build an Annotation with the file's text and recording the sentiment for that file.
Then you can go through each file and build up a Counter with positive and negative counts for each adjective.
The final Counter has the adjective "great" with a count of 2.
import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.stats.Counter;
import edu.stanford.nlp.stats.ClassicCounter;
import edu.stanford.nlp.util.CoreMap;
import java.util.Properties;
public class AdjectiveSentimentExample {
public static void main(String[] args) throws Exception {
Counter<String> adjectivePositiveCounts = new ClassicCounter<String>();
Counter<String> adjectiveNegativeCounts = new ClassicCounter<String>();
Annotation review = new Annotation("The movie was great! It was a great film.");
String sentiment = "positive";
Properties props = new Properties();
props.setProperty("annotators", "tokenize, ssplit, pos, lemma");
StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
pipeline.annotate(review);
for (CoreMap sentence : review.get(CoreAnnotations.SentencesAnnotation.class)) {
for (CoreLabel cl : sentence.get(CoreAnnotations.TokensAnnotation.class)) {
if (cl.get(CoreAnnotations.PartOfSpeechAnnotation.class).equals("JJ")) {
if (sentiment.equals("positive")) {
adjectivePositiveCounts.incrementCount(cl.word());
} else if (sentiment.equals("negative")) {
adjectiveNegativeCounts.incrementCount(cl.word());
}
}
}
}
System.out.println("---");
System.out.println("positive adjective counts");
System.out.println(adjectivePositiveCounts);
System.out.println("negative adjective counts");
System.out.println(adjectiveNegativeCounts);
}
}
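If you already have the annotated output files on disk, you don't have to re-run the pipeline: the bracketed token lines can be parsed with a plain regex and tallied in a map instead of CoreNLP's Counter. A stdlib sketch, with illustrative class and method names of my own:

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class AdjectiveCounter {
    // Matches the bracketed token lines in the annotated output shown above.
    private static final Pattern TOKEN = Pattern.compile(
            "\\[Text=(\\S+) .*?PartOfSpeech=(\\S+) Lemma=(\\S+)\\]");

    // Tallies lemmas whose POS tag is adjectival (JJ, JJR, JJS).
    public static Map<String, Integer> countAdjectives(String annotatedOutput) {
        Map<String, Integer> counts = new TreeMap<>();
        Matcher m = TOKEN.matcher(annotatedOutput);
        while (m.find()) {
            if (m.group(2).startsWith("JJ")) {
                counts.merge(m.group(3), 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        String sample =
                "[Text=short CharacterOffsetBegin=34 CharacterOffsetEnd=39 PartOfSpeech=JJ Lemma=short]\n"
              + "[Text=is CharacterOffsetBegin=57 CharacterOffsetEnd=59 PartOfSpeech=VBZ Lemma=be]\n"
              + "[Text=boring CharacterOffsetBegin=60 CharacterOffsetEnd=66 PartOfSpeech=JJ Lemma=boring]";
        System.out.println(countAdjectives(sample)); // {boring=1, short=1}
    }
}
```

Run it once over the positive files and once over the negative files, then sort each map by count to get the most frequent adjectives on each side.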
Is there a way to get all HTML5 tags with one command instead of doing this?
int h = (doc.getElementsByTag("canvas").size()
+ doc.getElementsByTag("audio").size()
+ doc.getElementsByTag("embed").size()
+ doc.getElementsByTag("source").size()
etc.
You can achieve the same result in one command using the select() method. This method allows you to specify multiple selectors, and Jsoup will return unique elements from the document that match any of the specified selectors.
For example:
int h = doc.select("canvas, audio, embed, source").size();
You can add as many comma-separated arguments as you need (e.g. all of the new elements introduced in HTML5).
Here you are:
all_html5_tags = [ '!DOCTYPE' ,'a' ,'abbr' ,'acronym' ,'address' ,'applet' ,'area' ,'article' ,'aside' ,'audio' ,'b' ,'base' ,'basefont' ,'bdi' ,'bdo' ,'big' ,'blockquote' ,'body' ,'br' ,'button' ,'canvas' ,'caption' ,'center' ,'cite' ,'code' ,'col' ,'colgroup' ,'command' ,'datalist' ,'dd' ,'del' ,'details' ,'dfn' ,'dir' ,'div' ,'dl' ,'dt' ,'em' ,'embed' ,'fieldset' ,'figcaption' ,'figure' ,'font' ,'footer' ,'form' ,'frame' ,'frameset' ,'h1' ,'h2' ,'h3' ,'h4' ,'h5' ,'h6' ,'head' ,'header' ,'hgroup' ,'hr' ,'html' ,'i' ,'iframe' ,'img' ,'input' ,'ins' ,'kbd' ,'keygen' ,'label' ,'legend' ,'li' ,'link' ,'map' ,'mark' ,'menu' ,'meta' ,'meter' ,'nav' ,'noframes' ,'noscript' ,'object' ,'ol' ,'optgroup' ,'option' ,'output' ,'p' ,'param' ,'pre' ,'progress' ,'q' ,'rp' ,'rt' ,'ruby' ,'s' ,'samp' ,'script' ,'section' ,'select' ,'small' ,'source' ,'span' ,'strike' ,'strong' ,'style' ,'sub' ,'summary' ,'sup' ,'table' ,'tbody' ,'td' ,'textarea' ,'tfoot' ,'th' ,'thead' ,'time' ,'title' ,'tr' ,'track' ,'tt' ,'u' ,'ul' ,'var' ,'video' ,'wbr' ]
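If you want to cover every tag in a list like the one above, you can join the names into a single comma-separated group selector for Jsoup's select(), as in the one-liner answer earlier. A small stdlib sketch with a shortened, illustrative tag array:

```java
public class SelectorBuilder {
    // Jsoup's select() accepts a comma-separated group of selectors,
    // so one joined string covers the whole tag list.
    public static String buildSelector(String[] tags) {
        return String.join(", ", tags);
    }

    public static void main(String[] args) {
        String[] tags = {"canvas", "audio", "embed", "source", "video"};
        System.out.println(buildSelector(tags)); // canvas, audio, embed, source, video
        // then: int h = doc.select(buildSelector(tags)).size();
    }
}
```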
I've been trying to get some information about my friends through facebook4j, but every time I try I receive only null pointers (except for id/name).
Here's the code:
Stalker(String victimUsername, String yourUsername,boolean isFriend) {
try {
friendList = facebook.getFriends(yourUsername);
if(isFriend){
int index = friendList.indexOf(facebook.getUser(victimUsername));
victimUser = friendList.get(index);
println(victimUser);
}
else
victimUser = facebook.getUser(victimUsername);
try {
birthdayVictimDate.setTime(df.parse(victimUser.getBirthday()));
}
catch(ParseException e) {
e.printStackTrace();
}
victimProfilePic = loadImage(facebook.getPictureURL(victimUsername).toString());
}
catch(FacebookException e) {
e.printStackTrace();
}
}
As you can see:
FriendJSONImpl extends UserJSONImpl [id=733939867, name=Tamires Ribeiro, firstName=null, middleName=null, lastName=null, gender=null, locale=null, languages=[], link=null, username=null, thirdPartyId=null, timezone=null, updatedTime=null, verified=null, bio=null, birthday=null, cover=null, education=[], email=null, hometown=null, interestedIn=[], location=null, political=null, favoriteAthletes=[], favoriteTeams=[], picture=null, quotes=null, relationshipStatus=null, religion=null, significantOther=null, videoUploadLimits=null, website=null, work=[]]
If the user isn't a friend of mine I can access some of that information, and I get this:
UserJSONImpl [id=733939867, name=Tamires Ribeiro, firstName=Tamires, middleName=null, lastName=Ribeiro, gender=female, locale=pt_BR, languages=[], link=https://www.facebook.com/tamiresribeirodias, username=tamiresribeirodias, thirdPartyId=null, timezone=null, updatedTime=Sat Oct 26 21:20:17 BRST 2013, verified=null, bio=null, birthday=05/13, cover=null, education=[EducationJSONImpl [year=null, type=College, school=IdNameEntityJSONImpl [name=Federal University of Rio de Janeiro, id=109442119074280], degree=null, concentration=[], classes=[], with=[]]], email=null, hometown=null, interestedIn=[], location=null, political=null, favoriteAthletes=[], favoriteTeams=[IdNameEntityJSONImpl [name=Vegetarianos Pensam Melhor, id=240337606020973], IdNameEntityJSONImpl [name=Fluminense Football Club, id=159225040801899]], picture=null, quotes=null, relationshipStatus=In a relationship, religion=null, significantOther=IdNameEntityJSONImpl [name=Pedro Gabriel Lancelloti Pinto, id=100000337547270], videoUploadLimits=null, website=null, work=[]]
As you can see, I can get a lot more information directly through Facebook, which is confusing because as a friend I should get the same or more information.
Does anybody know why this happens and how it could be solved?
PS: in my app I set the "friends_birthday" permission and also set it on my user token.