I want to find out the geolocation by providing only the IP address.
My aim is to save the city, country, postal code and other information.
CraftPlayer cp = (CraftPlayer) p;
// getAddress() returns an InetSocketAddress, not a String
String address = cp.getAddress().getAddress().getHostAddress();
Is there any short way to find this out using only the IP?
I recommend using the newline-separated endpoint of ip-api.com: http://ip-api.com/docs/api:newline_separated
You can then choose which information you need and build your HTTP link accordingly, e.g.:
http://ip-api.com/line/8.8.8.8?fields=49471
The result in this example would be:
success
United States
US
VA
Virginia
Ashburn
20149
America/New_York
So you can write a method in Java that reads the HTTP response and splits it at \n to get the lines:
private void whatever(String ip) {
    String ipinfo = getHttp("http://ip-api.com/line/" + ip + "?fields=49471");
    if (ipinfo == null || !ipinfo.startsWith("success")) {
        // TODO: request failed
        return;
    }
    // the lines arrive in the order shown in the example above:
    // status, country, countryCode, region, regionName, city, zip, timezone
    String[] lines = ipinfo.split("\n");
    String country = lines[1];
    /*
    ...
    */
}
// needs: java.io.BufferedReader, java.io.InputStreamReader, java.io.IOException, java.net.URL
private static String getHttp(String url) {
    try (BufferedReader br = new BufferedReader(new InputStreamReader(new URL(url).openStream()))) {
        StringBuilder sb = new StringBuilder();
        String line;
        while ((line = br.readLine()) != null) {
            // append '\n' explicitly so split("\n") above works on every platform
            // (System.lineSeparator() is "\r\n" on Windows and would leave stray '\r's)
            sb.append(line).append('\n');
        }
        return sb.toString();
    } catch (IOException e) {
        e.printStackTrace();
    }
    return null;
}
Just make sure not to send too many queries in a short amount of time, since ip-api.com will ban you for it.
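One simple way to stay under the limit is to cache results per IP, so repeat lookups never hit the API; a minimal sketch (the cache field and helper below are my own additions, not part of ip-api):

// hypothetical per-IP cache so the same address is only queried once
private final Map<String, String> ipInfoCache = new HashMap<>();

private String getIpInfoCached(String ip) {
    String cached = ipInfoCache.get(ip);
    if (cached == null) {
        cached = getHttp("http://ip-api.com/line/" + ip + "?fields=49471");
        if (cached != null) {
            ipInfoCache.put(ip, cached); // only cache successful responses
        }
    }
    return cached;
}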
There are a lot of websites that provide free databases for IP geolocation.
Examples include:
MaxMind
IP2Location
At plugin startup you could download one of these databases and then query it locally at runtime.
If you choose to download the .bin format, you will have to initialize a local database and then import the data. Otherwise you can just use the CSV file with a Java library like opencsv.
From the documentation of opencsv:
For reading, create a bean to harbor the information you want to read,
annotate the bean fields with the opencsv annotations, then do this:
List<MyBean> beans = new CsvToBeanBuilder<MyBean>(new FileReader("yourfile.csv"))
        .withType(MyBean.class).build().parse();
Link to documentation: http://opencsv.sourceforge.net
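For example, a bean for a CSV-based IP database might look like this (a sketch only; the column layout is an assumption, check the actual file you download):

// hypothetical bean for an IP-range CSV; the positions are assumptions about the file layout
public class IpRangeRecord {
    @CsvBindByPosition(position = 0)
    private long ipFrom;        // start of the IP range (numeric form)

    @CsvBindByPosition(position = 1)
    private long ipTo;          // end of the IP range (numeric form)

    @CsvBindByPosition(position = 2)
    private String countryCode;

    // getters and setters omitted for brevity
}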
Hi, I can't figure out how to verify whether a user belongs to one or more groups under Linux using the Java 7 NIO library.
Can anyone help me with this issue?
You can try to read the file /etc/group.
I have developed a class to easily query this file:
public class UserInfo {

    public UserInfo() throws IOException {
        this.group2users = new HashMap<>();
        try (BufferedReader groupsReader = new BufferedReader(new FileReader(groupsFilePath))) {
            String line;
            // loop on readLine() rather than ready(): ready() only reports whether
            // reading would block, not whether the end of the file has been reached
            while ((line = groupsReader.readLine()) != null) {
                // each entry has the form: groupName:password:gid:user1,user2,...
                String[] tokens = line.split(":");
                if (tokens.length == 0) {
                    continue; // skip malformed lines
                }
                String groupName = tokens[0];
                Set<String> users = group2users.get(groupName);
                if (users == null) {
                    users = new HashSet<String>();
                    group2users.put(groupName, users);
                }
                if (tokens.length > 3) {
                    for (String uStr : tokens[3].split(",")) {
                        users.add(uStr);
                    }
                }
            }
        }
    }

    public boolean belongs2group(String user, String group) {
        Set<String> groupRef = group2users.get(group);
        if (groupRef == null) {
            return false;
        }
        return groupRef.contains(user);
    }

    private final String groupsFilePath = "/etc/group";
    private final Map<String, Set<String>> group2users;
}
This code parses the /etc/group file and keeps a map from each group to its set of users.
I have implemented just one query method (belongs2group), but it is fairly easy to add methods to list all groups and/or all users.
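For example, listing all groups could look like this (a sketch against the same group2users map):

// sketch: enumerate all group names from the parsed map
public Set<String> listGroups() {
    return java.util.Collections.unmodifiableSet(group2users.keySet());
}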
This code is written using the classic java.io API, but I think it can easily be adapted to NIO. Let me know if you need me to complete that step.
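For instance, the reading part could be adapted to NIO like this (a minimal sketch using java.nio.file.Files from Java 7; the parsing stays the same):

// NIO sketch: read /etc/group in one call, then reuse the parsing from the constructor
List<String> lines = Files.readAllLines(Paths.get("/etc/group"), StandardCharsets.UTF_8);
for (String line : lines) {
    String[] tokens = line.split(":");
    // ... same parsing as above ...
}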
I do not think that reading the local /etc/passwd or /etc/group is a good idea, because NIS/LDAP/IPA/PAM can introduce other sources of information about group membership.
So it depends on your environment and some other details. E.g.:
Groups for the logged-in (current) user:
new com.sun.security.auth.module.UnixSystem().getGroups()
Hadoop
org.apache.hadoop.security.UserGroupInformation.getBestUGI(null,"root").getGroupNames()
If neither is your case:
You can create a JNA wrapper for getgroups(2) (see the sketch after this list).
Or improve UnixSystem and Java_com_sun_security_auth_module_UnixSystem_getUnixInfo from the JDK to take a user id/name parameter.
Or rewrite an implementation of the org.apache.hadoop.security.GroupMappingServiceProvider interface so that it does not depend on the Hadoop environment.
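For the getgroups(2) option, a hedged JNA sketch might look like this (it reports the supplementary groups of the current process only, and assumes JNA 5.x is on the classpath):

import com.sun.jna.Library;
import com.sun.jna.Native;

public interface CLib extends Library {
    CLib INSTANCE = Native.load("c", CLib.class);

    // int getgroups(int size, gid_t list[]) from libc
    int getgroups(int size, int[] list);
}

// usage: fetch the numeric group ids of the calling process
int[] gids = new int[64]; // assumed generous upper bound for supplementary groups
int n = CLib.INSTANCE.getgroups(gids.length, gids);
for (int i = 0; i < n; i++) {
    System.out.println("gid: " + gids[i]);
}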
My Android application calls an authenticated web service API to download and sync records from the server, based on the type of data.
For example:
The application calls the API in a loop for the different content types (Commerce, Science, Arts).
For each content type, the application maintains a last-sync date so that it only syncs data newer than that date, going back one month.
The API call looks like:
private void loadData() {
    String apiUrl = "";
    String[] classArray = { "Commerce", "Science", "Arts" };
    try {
        for (int classIndex = 0; classIndex < classArray.length; classIndex++) {
            apiUrl = "http://www.myserver.com/datatype?class=" + classArray[classIndex]
                    + "&syncDate=" + lastSyncDate;
            String responseStr = getSyncData(apiUrl);
            // Code to parse the JSON data and store it in SqliteDB.
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}
private String getSyncData(String webservice) {
    StringBuilder json = new StringBuilder();
    try {
        URL url = new URL(webservice);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection(Proxy.NO_PROXY); // using a proxy may increase latency
        conn.setInstanceFollowRedirects(false);
        String userName = "abc#myserver.com", password = "abc123";
        String base64EncodedCredentials = Base64.encodeToString(
                (userName + ":" + password).getBytes(),
                Base64.URL_SAFE | Base64.NO_WRAP);
        conn.setRequestProperty("Authorization", "Basic " + base64EncodedCredentials);
        BufferedReader rd = new BufferedReader(new InputStreamReader(conn.getInputStream()));
        String line;
        // accumulate the whole response instead of returning after the first line,
        // which also left rd.close() unreachable in the original code
        while ((line = rd.readLine()) != null) {
            json.append(line);
        }
        rd.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
    return json.toString();
}
The getSyncData() method returns the JSON response, which is parsed and stored in SqliteDB.
This code works fine, but there is a noticeable performance issue when there are more content types in classArray and each class has a large data set.
My question is: to improve the overall performance of this process, can I open the connection
to www.myserver.com once and pass the parameters with each API call in the loop, to stop creating a new connection for every content type?
Here I am using HttpURLConnection for the API calls, but I can use any other technique in Java.
The main goal is to make the connection persistent so that the application does not create it again for each call, because every call currently creates a separate connection, which costs time.
I've done similar processing before, with webcall -> parse JSON -> store DB -> show/update views,
and after a lot of testing and debugging I found out that what was actually slowing the process down was the store DB part; it had nothing to do with the web call or the JSON parsing.
I solved the situation by changing it to:
webcall -> parse JSON -> fire new thread to store DB -> show/update views
and with that simple change my results started appearing in about 1 second (instead of the previous 5 to 6 seconds).
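A minimal sketch of that change (Record, parseJson() and storeInDb() are hypothetical placeholders for your existing parsing and SqliteDB code):

// parse the JSON on the current thread, as before
final List<Record> records = parseJson(responseStr); // hypothetical parse step

// hand the slow SQLite writes off to a background thread
new Thread(new Runnable() {
    @Override
    public void run() {
        storeInDb(records); // hypothetical helper wrapping the SqliteDB inserts
    }
}).start();

// views can be shown/updated immediately, without waiting for the DB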
Hope it helps.
Edit:
Regarding the connection itself, you could use WebSockets (which are persistent, but not very well supported on Android; you'll have to do quite an amount of manual parsing), so I suggest testing the DB change first.
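As a side note on the original question: as far as I know, HttpURLConnection on Android already pools and reuses sockets via HTTP keep-alive, provided you fully read and close every response stream; the behavior can be tuned with standard system properties:

// connection pooling is on by default; these standard properties tune it
System.setProperty("http.keepAlive", "true");   // reuse sockets across requests (the default)
System.setProperty("http.maxConnections", "5"); // idle connections kept per destination
// important: fully consume and close each response body, or the socket cannot be reused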
To log into the Google App Engine API I need to provide a password. I do not want to hard-code the password in the source code, so I provide a method that reads the password from a locally stored file. Is this a secure approach?
I'm using the Google App Engine Remote API, which requires entering a username/password:
private String readPassword() {
    String str = "";
    try (BufferedReader in = new BufferedReader(new FileReader("c:\\password\\file.txt"))) {
        // read the single password line; the original code called in.close()
        // inside the loop, closing the reader right after the first line
        str = in.readLine();
    } catch (IOException e) {
        e.printStackTrace(); // don't swallow the exception silently
    }
    return str;
}
In Java, it's recommended to use a char array for storing passwords. See this SO answer for a good explanation.
In short, Strings are more vulnerable to being exposed in memory dumps, whereas char arrays can be explicitly wiped as soon as they are no longer needed.
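Applied to the method above, a minimal char[] sketch could look like this (the 256-char buffer size is an assumption, and a trailing newline may need trimming):

private char[] readPasswordChars() {
    char[] password = null;
    try (Reader in = new FileReader("c:\\password\\file.txt")) {
        char[] buf = new char[256];               // assumes the password fits in 256 chars
        int n = in.read(buf);
        password = java.util.Arrays.copyOf(buf, Math.max(n, 0));
        java.util.Arrays.fill(buf, '\0');         // wipe the temporary buffer
    } catch (IOException e) {
        e.printStackTrace();
    }
    return password; // the caller should Arrays.fill(password, '\0') after use
}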
public class Parser {

    public static void main(String[] args) {
        Parser p = new Parser();
        p.matchString();
    }

    parserObject courseObject = new parserObject();
    ArrayList<parserObject> courseObjects = new ArrayList<parserObject>();
    ArrayList<String> courseNames = new ArrayList<String>();
    String theWebPage = " ";

    {
        try {
            URL theUrl = new URL("http://ocw.mit.edu/courses/");
            BufferedReader reader =
                    new BufferedReader(new InputStreamReader(theUrl.openStream()));
            String str = null;
            while ((str = reader.readLine()) != null) {
                theWebPage = theWebPage + " " + str;
            }
            reader.close();
        } catch (MalformedURLException e) {
            // do nothing
        } catch (IOException e) {
            // do nothing
        }
    }

    public void matchString() {
        // this is my regex that I am using to compare strings on the input page
        String matchRegex = "#\\w+(-\\w+)+";
        Pattern p = Pattern.compile(matchRegex);
        Matcher m = p.matcher(theWebPage);
        int i = 0;
        // advance with find(); calling group() without a successful find()
        // throws IllegalStateException and the loop never reaches the end
        while (m.find()) {
            System.out.println(m.group());
            courseNames.add(i, m.group());
            i++;
        }
    }
}
What I am trying to achieve with the above code is to get the list of departments on the MIT OpenCourseWare website. I am using a regular expression that matches the pattern of the department names in the page source, and I am using a Pattern object and a Matcher object to find() and print the department names that match. But the code takes forever to run, and I don't think reading in a web page with a BufferedReader should take that long. So I think I am either doing something horribly wrong, or parsing websites takes a ridiculously long time. I would appreciate any input on how to improve performance or correct a mistake in my code, if any. I apologize for the badly written code.
The problem is with the code
while ((str = reader.readLine()) != null)
    theWebPage = theWebPage + " " + str;
The variable theWebPage is a String, which is immutable. For each line read, this code creates a new String containing a copy of everything read so far, with a space and the just-read line appended. This is an extraordinary amount of unnecessary copying, which is why the program runs so slowly.
I downloaded the web page in question. It has 55,000 lines and is about 3.25MB in size. Not too big. But because of the copying in the loop, lines end up being copied about 1.5 billion times in total (half of 55,000 squared). The program is spending all its time copying and garbage collecting. I ran this on my laptop (2.66GHz Core2Duo, 1GB heap) and it took 15 minutes to run when reading from a local file (no network latency or web crawling countermeasures).
To fix this, make theWebPage into a StringBuilder instead, and change the line in the loop to be
theWebPage.append(" ").append(str);
You can convert theWebPage to a String using toString() after the loop if you wish. When I ran the modified version, it took a fraction of a second.
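Put together, the corrected reading loop looks like this (same variable names as in the question):

StringBuilder theWebPage = new StringBuilder();
String str;
while ((str = reader.readLine()) != null) {
    theWebPage.append(" ").append(str); // amortized O(1) per append, no re-copying
}
String page = theWebPage.toString(); // convert once, after the loop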
BTW your code is using a bare code block within { } inside a class. This is an instance initializer (as opposed to a static initializer). It gets run at object construction time. This is legal, but it's quite unusual. Notice that it misled other commenters. I'd suggest converting this code block into a named method.
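For instance, the initializer block could become a named method along these lines (a sketch that also folds in the StringBuilder fix):

// a named method instead of a bare instance initializer block
private String loadWebPage() {
    StringBuilder sb = new StringBuilder();
    try (BufferedReader reader = new BufferedReader(
            new InputStreamReader(new URL("http://ocw.mit.edu/courses/").openStream()))) {
        String str;
        while ((str = reader.readLine()) != null) {
            sb.append(" ").append(str);
        }
    } catch (IOException e) {
        e.printStackTrace(); // at least report the failure
    }
    return sb.toString();
}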
Is this your whole program? Where is the declaration of parserObject?
Also, shouldn't all of this code be in your main() prior to calling matchString()?
parserObject courseObject = new parserObject();
ArrayList<parserObject> courseObjects = new ArrayList<parserObject>();
ArrayList<String> courseNames = new ArrayList<String>();
String theWebPage = " ";

{
    try {
        URL theUrl = new URL("http://ocw.mit.edu/courses/");
        BufferedReader reader = new BufferedReader(new InputStreamReader(theUrl.openStream()));
        String str = null;
        while ((str = reader.readLine()) != null) {
            theWebPage = theWebPage + " " + str;
        }
        reader.close();
    } catch (MalformedURLException e) {
    } catch (IOException e) {
    }
}
You are also catching exceptions without displaying any error messages. You should always display an error message and do something when you encounter an exception; for example, if you can't download the page, there is no reason to try to parse an empty string.
From your comment I learned about static blocks in classes (thank you, I didn't know about them). However, from what I've read, you need to put the keyword static before the start of the block { for it to be a static block. Also, it might just be better to put the code into your main(); that way you can exit if you get a MalformedURLException or IOException.
You can, of course, solve this assignment with the limited JDK 1.0 API, and run into the issue that Stuart Marks helped you solve in his excellent answer.
Or you just use a popular de-facto standard library, for instance Apache Commons IO, and read your website into a String using a no-brainer like this:
// using this...
import org.apache.commons.io.IOUtils;

// run this...
try (InputStream is = new URL("http://ocw.mit.edu/courses/").openStream()) {
    // the Charset overload avoids platform-default encoding surprises
    theWebPage = IOUtils.toString(is, StandardCharsets.UTF_8);
}
I have a mailbox file containing over 50 megs of messages separated by something like this:
From - Thu Jul 19 07:11:55 2007
I want to build a regular expression for this in Java to extract each mail message one at a time, so I tried using a Scanner with the following pattern:
public boolean ParseData(DataSource data_source) {
    boolean is_successful_transfer = false;
    String mail_header_regex = "^From\\s";
    LinkedList<String> ip_addresses = new LinkedList<String>();
    ASNRepository asn_repository = new ASNRepository();
    try {
        Pattern mail_header_pattern = Pattern.compile(mail_header_regex);
        File input_file = data_source.GetInputFile();

        // parse out each message from the mailbox
        Scanner scanner = new Scanner(input_file);
        while (scanner.hasNext(mail_header_pattern)) {
            String current_line = scanner.next(mail_header_pattern);
            Matcher mail_matcher = mail_header_pattern.matcher(current_line);

            // read each mail message and extract the proper "received from" ip address
            // to put it in our list of ip's we can add to the database to prepare
            // for querying.
            while (mail_matcher.find()) {
                String message_text = mail_matcher.group();
                String ip_address = get_ip_address(message_text);

                // an empty ip address means the line contains no "received from"
                if (!ip_address.trim().isEmpty())
                    ip_addresses.add(ip_address);
            }
        } // next line

        // add ip addresses from mailbox to database
        is_successful_transfer = asn_repository.AddIPAddresses(ip_addresses);
    }
    // error reading file -- unsuccessful transfer
    catch (FileNotFoundException ex) {
        is_successful_transfer = false;
    }
    return is_successful_transfer;
}
This seems like it should work, but whenever I run it, the program hangs, probably because it never finds the pattern. The same regular expression works in Perl with the same file, but in Java it always hangs on the line String current_line = scanner.next(mail_header_pattern);
Is this regular expression correct, or am I parsing the file incorrectly?
I'd lean toward something much simpler: just reading lines, something like this:
while (scanner.hasNextLine()) {
    String line = scanner.nextLine();
    if (line.matches("^From\\s.*")) {
        // it's a new email
    } else {
        // it's still part of the email body
    }
}
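To actually extract one message at a time with this approach, you could accumulate body lines in a StringBuilder and hand each finished message off when the next From line appears (processMessage() is a hypothetical handler):

StringBuilder current = new StringBuilder();
while (scanner.hasNextLine()) {
    String line = scanner.nextLine();
    if (line.matches("^From\\s.*") && current.length() > 0) {
        processMessage(current.toString()); // hypothetical per-message handler
        current.setLength(0);               // start collecting the next message
    }
    current.append(line).append('\n');
}
if (current.length() > 0) {
    processMessage(current.toString()); // don't forget the last message
}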