I found the code below for cracking MD5 hashes (from: aboulton.blogspot.com.tr):
package md5crack;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;
import java.util.logging.Level;
import java.util.logging.Logger;
/**
*
* @author Adam Boulton - Using Google to crack MD5
*/
public class MD5Cracker {
/**
* @param args the command line arguments
*/
public static void main(String[] args) {
if (args.length == 0 || args[0].isEmpty())
{
System.out.println("-= Google MD5 Cracker =-");
System.out.println("-= Adam Boulton - 2010 =-");
System.out.println("Usage: MD5crack <hash>");
return; // without a hash there is nothing to crack
}
String hash = args[0];
String url = String.format("https://www.google.com/search?q=%s", hash);
try {
URL oracle = new URL(url);
URLConnection conn = oracle.openConnection();
//keep Google happy, otherwise connection refused.
conn.setRequestProperty("user-agent", "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.7 (KHTML, like Gecko) Chrome/16.0.912.63 Safari/535.7");
BufferedReader in = new BufferedReader(
new InputStreamReader(
conn.getInputStream()));
String inputLine;
while ((inputLine = in.readLine()) != null) {
String[] words = inputLine.split("\\s+");
for (String word : words) {
String wordHash = DigestUtils.md5Hex(word);
if (wordHash.equals(hash)) {
System.out.println("[*] Found: " + word);
System.exit(0);
}
}
}
System.out.println("[!] No results.");
in.close();
} catch (IOException ex) {
Logger.getLogger(MD5Cracker.class.getName()).log(Level.SEVERE, null, ex);
}
}
}
but on this line I get an error:
String wordHash = DigestUtils.md5Hex(word);
Error :
DigestUtils cannot be resolved
How can I fix that?
And is this a good method or class for cracking and finding the text behind an MD5 hash?
How can we use this, optimized, on Android?
Thanks.
Your program doesn't know what DigestUtils stands for.
You need to either import it, or download Apache Commons Codec, reference the jar as an external library in your project, and then import org.apache.commons.codec.digest.DigestUtils.
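For reference, once the jar is on the classpath, a minimal sanity check of DigestUtils looks like this (a sketch; the expected output is the well-known MD5 of "hello"):
```java
import org.apache.commons.codec.digest.DigestUtils;

public class Md5Demo {
    public static void main(String[] args) {
        // md5Hex returns the digest as a 32-character lowercase hex string
        String hash = DigestUtils.md5Hex("hello");
        System.out.println(hash); // prints 5d41402abc4f5c85b86a72f81b4b1b3f
    }
}
```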
... and is this a good method or class for cracking and finding the text behind an MD5 hash?
Is it going to be effective? In general, probably not.
There are 340,282,366,920,938,463,463,374,607,431,768,211,456 possible different MD5 hashes. If you have an MD5 hash for an arbitrary string or document, then the chances of finding that hash in Google search results is vanishingly small.
Indeed, it is pretty obvious that Google's search engines wouldn't be able to index more than a minuscule fraction of the possible MD5 hashes. According to this article, the "big 4" internet companies were estimated to have ~1,200 petabytes of storage in 2013. That's a factor of ~10^20 too small just to store all possible MD5 hashes.
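The arithmetic is easy to check (a sketch; it generously counts only one byte per hash and ignores the strings that would have to be stored alongside them):
```java
import java.math.BigInteger;

public class Md5SpaceCheck {
    public static void main(String[] args) {
        BigInteger hashes = BigInteger.valueOf(2).pow(128);   // all possible MD5 digests
        BigInteger bytes = BigInteger.valueOf(1200)
                .multiply(BigInteger.TEN.pow(15));            // ~1,200 PB in bytes
        // Prints a 21-digit number, i.e. a shortfall factor of roughly 10^20
        System.out.println(hashes.divide(bytes));
    }
}
```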
However, if you can identify a use-case where mappings between relevant strings and MD5 hashes are systematically published in web pages, then this approach might work ... for those strings. One such use-case would be MD5 hashes for email addresses, if someone has published the mappings in pages that Google indexes.
Related
I have set up a Java program that I made for my apprenticeship project that takes in a JSON file of English strings and outputs a JSON file in a different language, specified on the console. Some languages, like French and Italian, output with the correct translations, whereas Russian or Japanese output with question marks, as seen below.
I searched around and saw that I needed to get the bytes of my string and then encode them as UTF-8. I did this but was still getting question marks, so I started to use the standard charsets built into Java and tried different ways of encoding/decoding the string. I tried this:
and this gave me a different output: Ð?Ñ?ивеÑ?
package com.bis.propertyfiletranslator;
import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.util.List;
import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
import com.google.api.client.googleapis.json.GoogleJsonResponseException;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.services.translate.Translate;
import com.google.api.services.translate.model.TranslationsListResponse;
import com.google.api.services.translate.model.TranslationsResource;
public class Translator {
public static Translate.Translations.List list;
private static final Charset UTF_8 = Charset.forName("UTF-8");
private static final Charset ISO = Charset.forName("ISO-8859-1");
public static void translateJSONMapThroughGoogle(String input, String output, String API, String language,
List<String> subLists) throws IOException, GeneralSecurityException {
Translate t = new Translate.Builder(GoogleNetHttpTransport.newTrustedTransport(),
JacksonFactory.getDefaultInstance(), null).setApplicationName("PhoenUX-Google-Translate").build();
try {
list = t.new Translations().list(subLists, language).setFormat("text");
list.setKey(API);
} catch (GoogleJsonResponseException e) {
if (e.getDetails().getMessage().equals("Invalid Value")) {
System.err.println(
"\n Language not currently supported, check the accepted language codes and try again.\n\n Language Requested: "
+ language);
} else {
System.out.println(e.getDetails().getMessage());
}
}
TranslationsListResponse response = list.execute(); // run the request prepared above
for (TranslationsResource translationsResource : response.getTranslations()) {
for (String key : JSONFunctions.jsonHashMap.keySet()) {
JSONFunctions.jsonHashMap.remove(key);
String value = translationsResource.getTranslatedText();
String encoded = new String(value.getBytes(StandardCharsets.UTF_8), StandardCharsets.ISO_8859_1);
JSONFunctions.jsonHashMap.put(key, encoded);
System.out.println(encoded);
break;
}
}
JSONFunctions.outputTranslationsBackToJson(output);
}
}
So this is using the Google Cloud library. I added a sysout so I could see the results of what I had tried, so this code should be all you need to replicate it.
I expect the output of "Hello" to be "Привет" (Russian); the actual output is ???? or Ð?Ñ?ивеÑ?, depending on the encoding I use.
String encoded = new String(...) is dead wrong. Just put the value in unchanged:
put(key, value);
Note that System.out.println will always have problems, as the OS console encoding might be some Windows ANSI encoding that is not Unicode-capable - and String contains Unicode.
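A minimal sketch of the fix (the writeJson helper and its parameters are illustrative, not the asker's actual code): keep the translated String unchanged, and declare the charset explicitly only where bytes are produced, i.e. when the JSON file is written.
```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;

public class Utf8JsonWriter {
    // Write the serialized JSON with an explicit charset instead of relying
    // on the platform default; the String itself needs no re-encoding.
    static void writeJson(String jsonText, String path) throws IOException {
        try (Writer out = new OutputStreamWriter(
                new FileOutputStream(path), StandardCharsets.UTF_8)) {
            out.write(jsonText);
        }
    }
}
```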
I'm a complete beginner in the world of cryptography. I have two files - one of them is a digital signature (p7s, signedData) - and a file whose signature I should verify. The question is: how can I do that using Java? At first I thought I could do it like this:
String rawString = ASN1ObjectIdentifier.fromByteArray(bytesArray).toString();
String rawStringForSurname = rawString.substring(rawString.indexOf("2.5.4.4,") + 9, rawString.length());
String signSurname = rawStringForSurname.substring(0, rawStringForSurname.indexOf("]"));
String rawStringForGivenName = rawString.substring(rawString.indexOf("2.5.4.42,") + 10, rawString.length());
String signGivenName = rawStringForGivenName.substring(0, rawStringForGivenName.indexOf("]"));
Which is awful, obviously. My input was originally intended to be just one file (the p7s file, which is decoded to ASN.1 so that the surname and given name can be verified against the author's data, a string from outside). Surprisingly, it turned out that I should also have the file that the signature was computed over. I know that there is hash logic involved (to show that the file is intact and the signature relates EXACTLY to that file). The question is: how can I retrieve this data from the file and the signature? And what exactly should I compare in order to accept or reject it? The library I use is Bouncy Castle.
I use the Demoiselle Framework to solve this in my project,
especially the Signer part.
Maybe this piece of code can help you too.
For example:
```java
package stackoverflow.my.pack;
import java.nio.file.Path;
import java.security.PrivateKey;
import java.security.cert.X509Certificate;
import java.util.Arrays;
import java.security.cert.Certificate;
import org.demoiselle.signer.policy.engine.factory.PolicyFactory.Policies;
import org.demoiselle.signer.policy.impl.cades.factory.PKCS7Factory;
import org.demoiselle.signer.policy.impl.cades.pkcs7.PKCS7Signer;
public class MySigner {
private static PKCS7Signer signerLoader() {
PKCS7Signer signer = PKCS7Factory.getInstance().factoryDefault();
PrivateKey pk = MyReader.getPrivateKey();// Create some method to get your PK
signer.setPrivateKey(pk);
X509Certificate certificate = MyReader.getMyPublicKey();// Create some method to get your Pub
signer.setCertificates(new Certificate[] { certificate }); // wrap in an array; casting toArray() would fail at runtime
if (is2048(pk)) { // is2048: helper (not shown) that checks the key size
signer.setSignaturePolicy(Policies.AD_RB_CADES_2_2);
} else {
signer.setSignaturePolicy(Policies.AD_RB_CADES_1_1);
}
return signer;
}
public static byte[] myMethodToSign(byte[] fileToBeSigned) throws MySignerException {
PKCS7Signer signer = signerLoader();
return signer.doHashSign(fileToBeSigned);
}
}
```
Demoiselle uses BouncyCastle too; you can look at their sign method for guidance.
You have to load your PK, add BouncyCastle as your provider, get your signature policy information, validate the certificate chain, get a data generator and build your attribute table.
That is the way Demoiselle does it.
If you want to see an example of using their Signer:
https://github.com/demoiselle/signer/blob/master/signer-examples/src/main/java/org/demoiselle/signer/signer/examples/Signer.java
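Since the question is specifically about verifying a detached p7s against the original file, here is a minimal Bouncy Castle sketch (class names from the bcpkix CMS API; it checks the digest against the content and the signature against the signer's certificate, but it does not validate the certificate chain):
```java
import java.security.Security;
import java.util.Collection;
import org.bouncycastle.cert.X509CertificateHolder;
import org.bouncycastle.cms.CMSProcessableByteArray;
import org.bouncycastle.cms.CMSSignedData;
import org.bouncycastle.cms.SignerInformation;
import org.bouncycastle.cms.jcajce.JcaSimpleSignerInfoVerifierBuilder;
import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.bouncycastle.util.Store;

public class DetachedP7sVerifier {
    // contentBytes: the file that was signed; p7sBytes: the .p7s signature
    public static boolean verify(byte[] contentBytes, byte[] p7sBytes) throws Exception {
        Security.addProvider(new BouncyCastleProvider());
        CMSSignedData signedData =
                new CMSSignedData(new CMSProcessableByteArray(contentBytes), p7sBytes);
        Store<X509CertificateHolder> certs = signedData.getCertificates();
        for (SignerInformation signer : signedData.getSignerInfos().getSigners()) {
            Collection<X509CertificateHolder> matches = certs.getMatches(signer.getSID());
            X509CertificateHolder cert = matches.iterator().next();
            if (!signer.verify(new JcaSimpleSignerInfoVerifierBuilder()
                    .setProvider("BC").build(cert))) {
                return false; // digest or signature mismatch: reject
            }
        }
        return true; // every signer verified against the supplied content
    }
}
```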
Basically I intend to extract the entire category tree of Wikipedia under the root node "Economics", using the Wikipedia API sandbox. I don't need the content of the articles; I just need a few basic details like pageid, title and revision history (at some later stage of my work). As of now I can extract it level by level, but what I want is a recursive/iterative function which does it.
Each category contains sub-categories and articles (like each root contains nodes and leaves).
I wrote code to extract the first level into files. One file contains the articles; the second contains the names of the categories (daughters of the root, which can be further sub-classified).
Then I went a level deeper and extracted their categories, articles and sub-categories using similar code.
The code remains similar at each level, but the issue is scalability. I need to reach the lowest leaves of all nodes, so I need a recursion which continuously checks till the end.
I labelled the files which contain categories with 'c_', so I can provide the condition while extracting different levels.
Now for some reason the program has entered an endless loop and keeps adding the same things again and again. I need a way out of that loop.
package wikiCrawl;
import java.awt.List;
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.FileWriter;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.net.HttpURLConnection;
import java.net.MalformedURLException;
import java.net.URL;
import java.util.ArrayList;
import java.util.Scanner;
import org.apache.commons.io.FileUtils;
import org.json.CDL;
import org.json.JSONArray;
import org.json.JSONException;
import org.json.JSONObject;
public class SubCrawl
{
public static void main(String[] args) throws IOException, InterruptedException, JSONException
{
File file = new File("C:/Users/User/Desktop/Root/Economics_2.txt");
crawlfile(file);
}
public static void crawlfile(File food) throws JSONException, IOException ,InterruptedException
{
ArrayList<String> cat_list =new ArrayList <String>();
Scanner scanner_cat = new Scanner(food);
scanner_cat.useDelimiter("\n");
while (scanner_cat.hasNext())
{
String scan_n = scanner_cat.next();
if(scan_n.indexOf(":")>-1)
cat_list.add(scan_n.substring(scan_n.indexOf(":")+1));
}
System.out.println(cat_list);
//get the categories in different languages
URL category_json;
for (int i_cat=0; i_cat<cat_list.size();i_cat++)
{
category_json = new URL("https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category%3A"+cat_list.get(i_cat).replaceAll(" ", "%20").trim()+"&cmlimit=500"); //.trim() removes trailing and following whitespaces
System.out.println(category_json);
HttpURLConnection urlConnection = (HttpURLConnection) category_json.openConnection(); //Opens the connection to the URL so clients can communicate with the resources.
BufferedReader reader = new BufferedReader (new InputStreamReader(category_json.openStream()));
String line;
String diff = "";
while ((line = reader.readLine()) != null)
{
System.out.println(line);
diff=diff+line;
}
urlConnection.disconnect();
reader.close();
JSONArray jsonarray_cat = new JSONArray (diff.substring(diff.indexOf("[{\"pageid\"")));
System.out.println(jsonarray_cat);
//Loop categories
for (int i_url = 0; i_url<jsonarray_cat.length();i_url++) //jSONarray is an array of json objects, we are looping through each object
{
//Get the URL _part (Categorie isn't correct)
int pageid=Integer.parseInt(jsonarray_cat.getJSONObject(i_url).getString("pageid")); //this can be written in a much better way
System.out.println(pageid);
String title=jsonarray_cat.getJSONObject(i_url).getString("title");
System.out.println(title);
File food_year= new File("C:/Users/User/Desktop/Root/"+cat_list.get(i_cat).replaceAll(" ", "_").trim()+".txt");
File food_year2= new File("C:/Users/User/Desktop/Root/c_"+cat_list.get(i_cat).replaceAll(" ", "_").trim()+".txt");
food_year.createNewFile();
food_year2.createNewFile();
BufferedWriter writer = new BufferedWriter (new OutputStreamWriter(new FileOutputStream(food_year, true)));
BufferedWriter writer2 = new BufferedWriter (new OutputStreamWriter(new FileOutputStream(food_year2, true)));
if (title.contains("Category:"))
{
writer2.write(pageid+";"+title);
writer2.newLine();
writer2.flush();
crawlfile(food_year2);
}
else
{
writer.write(pageid+";"+title);
writer.newLine();
writer.flush();
}
}
}
}
}
For starters this might be too big a demand on the Wikimedia servers. There are over a million categories, and you need to read Wikipedia:Database download - Why not just retrieve data from wikipedia.org at runtime. You would need to throttle your requests to about 1 per second or risk getting blocked; at that rate it would take about 11 days to get the full tree.
It would be much better to use the standard dumps at https://dumps.wikimedia.org/enwiki/ - these will be easier to read and process, and you don't put a big load on the server.
Still better is to get a Wikimedia Labs account, which allows you to run queries against a replica of the database servers, or scripts over the dumps, without having to download some very big files.
To get just the economics categories, it's easiest to go via https://en.wikipedia.org/wiki/Wikipedia:WikiProject_Economics which has 1242 categories. You may find it easier to use the list of categories there and build the tree from that.
This will be better than a recursive approach. The problem with the Wikipedia category system is that it is not really a tree: there are plenty of loops. I would not be surprised if, by following categories to the end, you ended up with most of Wikipedia.
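If you do crawl via the API anyway, the two fixes the posted code needs are a visited set (since the category graph has cycles, plain recursion loops forever) and a throttle. A hedged outline, where fetchSubcategories stands in for the categorymembers call from the question:
```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class CategoryCrawler {
    // Placeholder for the API call in the question; not part of the original code.
    interface WikiApi {
        List<String> fetchSubcategories(String category) throws Exception;
    }

    static void crawl(String root, WikiApi api) throws Exception {
        Set<String> visited = new HashSet<>(); // remembers categories already expanded
        Deque<String> queue = new ArrayDeque<>();
        queue.add(root);
        while (!queue.isEmpty()) {
            String category = queue.poll();
            if (!visited.add(category)) {
                continue; // already seen: this is what breaks the cycles
            }
            queue.addAll(api.fetchSubcategories(category));
            Thread.sleep(1000); // ~1 request per second, as suggested above
        }
    }
}
```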
Write a piece of code that will query a URL that returns JSON and can parse the JSON string to pull out pieces of information. The information that should be parsed and returned is the pageid and the list of “See Also” links. Those links should be formatted to be actual links that can be used by a person to find the appropriate article.
Use the Wikipedia API for the query. A sample query is:
URL
Other queries can be generated changing the “titles” portion of the query string. The code to parse the JSON and pull the “See Also” links should be generic enough to work on any Wikipedia article.
I tried writing the below code:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;
import org.json.JSONException;
import org.json.JSONObject;
public class JsonRead {
private static String readUrl(String urlString) throws Exception {
BufferedReader reader = null;
try {
URL url = new URL(urlString);
reader = new BufferedReader(new InputStreamReader(url.openStream()));
StringBuffer buffer = new StringBuffer();
int read;
char[] chars = new char[1024];
while ((read = reader.read(chars)) != -1)
buffer.append(chars, 0, read);
return buffer.toString();
} finally {
if (reader != null)
reader.close();
}
}
public static void main(String[] args) throws IOException, JSONException {
JSONObject json;
try {
json = new JSONObject(readUrl("https://en.wikipedia.org/w/api.php?format=json&action=query&titles=SMALL&prop=revisions&rvprop=content"));
System.out.println(json.toString());
System.out.println(json.get("pageid"));
} catch (Exception e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
I have used the json jar from the below link in eclipse:
Json jar
When I run the above code I am getting the below error:
org.json.JSONException: JSONObject["pageid"] not found.
at org.json.JSONObject.get(JSONObject.java:471)
at JsonRead.main(JsonRead.java:35)
How can I extract the details of the pageid and also the "See Also" links from the URL?
I have never worked with JSON before, so kindly let me know how to proceed here.
The json:
{
"batchcomplete":"",
"query":{
"pages":{
"1808130":{
"pageid":1808130,
"ns":0,
"title":"SMALL",
"revisions":[
{
"contentformat":"text/x-wiki",
"contentmodel":"wikitext",
"*":"{{About|the ALGOL-like programming language|the scripting language formerly named Small|Pawn (scripting language)}}\n\n'''SMALL''', Small Machine Algol Like Language, is a [[computer programming|programming]] [[programming language|language]] developed by Dr. [[Nevil Brownlee]] of [[Auckland University]].\n\n==History==\nThe aim of the language was to enable people to write [[ALGOL]]-like code that ran on a small machine. It also included the '''string''' type for easier text manipulation.\n\nSMALL was used extensively from about 1980 to 1985 at [[Auckland University]] as a programming teaching aid, and for some internal projects. Originally written to run on a [[Burroughs Corporation]] B6700 [[Main frame]] in [[Fortran]] IV, subsequently rewritten in SMALL and ported to a DEC [[PDP-10]] Architecture (on the [[Operating System]] [[TOPS-10]]) and IBM S360 Architecture (on the Operating System VM/[[Conversational Monitor System|CMS]]).\n\nAbout 1985, SMALL had some [[Object-oriented programming|object-oriented]] features added to handle structures (that were missing from the early language), and to formalise file manipulation operations.\n\n==See also==\n*[[ALGOL]]\n*[[Lua (programming language)]]\n*[[Squirrel (programming language)]]\n\n==References==\n*[http://www.caida.org/home/seniorstaff/nevil.xml Nevil Brownlee]\n\n[[Category:Algol programming language family]]\n[[Category:Systems programming languages]]\n[[Category:Procedural programming languages]]\n[[Category:Object-oriented programming languages]]\n[[Category:Programming languages created in the 1980s]]"
}
]
}
}
}
}
If you read your exception carefully, you will find the solution on your own.
Exception in thread "main" org.json.JSONException: A JSONObject text must begin with '{' at 1 [character 2 line 1]
at org.json.JSONTokener.syntaxError(JSONTokener.java:433)
Your exception says A JSONObject text must begin with '{'; it means the JSON you received from the API is probably not correct.
So, I suggest you debug your code and try to find out what you actually received in your String variable jsonText.
You get the exception org.json.JSONException: JSONObject["pageid"] not found. when calling json.get("pageid") because pageid is not a direct sub-element of your root. You have to go all the way down through the object graph:
int pid = json.getJSONObject("query")
.getJSONObject("pages")
.getJSONObject("1808130")
.getInt("pageid");
If you have an array in there you will even have to iterate the array elements (or pick the one you want).
Edit: Here's the code to get the field containing the 'see also' values:
String s = json.getJSONObject("query")
.getJSONObject("pages")
.getJSONObject("1808130")
.getJSONArray("revisions")
.getJSONObject(0)
.getString("*");
The resulting string is not valid JSON but raw wikitext. You will have to parse it manually.
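One hedged way to do that parsing (seeAlsoLinks is an illustrative helper, and the regex is a heuristic covering plain [[Target]] and [[Target|label]] links):
```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SeeAlsoExtractor {
    // Turns the "==See also==" section of a wikitext string into article URLs.
    static List<String> seeAlsoLinks(String wikitext) {
        List<String> links = new ArrayList<>();
        int start = wikitext.indexOf("==See also==");
        if (start < 0) {
            return links; // article has no "See also" section
        }
        int end = wikitext.indexOf("\n==", start + 1); // next section heading
        String section = (end < 0) ? wikitext.substring(start)
                                   : wikitext.substring(start, end);
        Matcher m = Pattern.compile("\\[\\[([^\\]|]+)").matcher(section);
        while (m.find()) {
            links.add("https://en.wikipedia.org/wiki/"
                    + m.group(1).trim().replace(' ', '_'));
        }
        return links;
    }
}
```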
I have searched a lot and read many blogs, articles and tutorials, but so far I have not found a working example of using a Facebook account to log in to my application.
I know that I have to use OAuth, get tokens, authorizations, etc...
Can anyone share an example?
Here is how I do it on App Engine:
Step 1) Register an "app" on Facebook (cf. https://developers.facebook.com/ ). You give Facebook a name for the app and a URL. The URL you register is the URL of the page (JSP or servlet) that you want to handle the login. From the registration you get two strings, an "app ID" and an "app secret" (the latter being your password; do not give this out or write it in HTML).
For this example, let's say the url I register is "http://myappengineappid.appspot.com/signin_fb.do".
Step 2) From a webpage, say with a button, you redirect the user to the following URL on Facebook, substituting your app ID for "myfacebookappid" in the example below. You also have to choose which permissions (or "scopes") you want to ask the user for (cf. https://developers.facebook.com/docs/reference/api/permissions/ ). In the example I ask for access to the user's email only.
(A useful thing to know is that you can also pass along an optional string that will be returned unchanged in the "state" parameter. For instance, I pass the user's datastore key, so I can retrieve the user when Facebook passes the key back to me. I do not do this in the example.)
Here is a jsp snippet:
<%@page import="java.net.URLEncoder" %>
<%
String fbURL = "http://www.facebook.com/dialog/oauth?client_id=myfacebookappid&redirect_uri=" + URLEncoder.encode("http://myappengineappid.appspot.com/signin_fb.do") + "&scope=email";
%>
<a href="<%= fbURL %>"><img src="/img/facebook.png" border="0" /></a>
Step 3) Your user will be forwarded to Facebook and asked to approve the permissions you ask for. Then the user will be redirected back to the URL you have registered. In this example that is "http://myappengineappid.appspot.com/signin_fb.do", which in my web.xml maps to the following servlet:
import org.json.JSONObject;
import org.json.JSONException;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.IOException;
import java.net.URL;
import java.net.URLConnection;
import java.net.URLEncoder;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
public class SignInFB extends HttpServlet {
public void service(HttpServletRequest req, HttpServletResponse res) throws ServletException, IOException {
String code = req.getParameter("code");
if (code == null || code.equals("")) {
// an error occurred, handle this
}
String token = null;
try {
String g = "https://graph.facebook.com/oauth/access_token?client_id=myfacebookappid&redirect_uri=" + URLEncoder.encode("http://myappengineappid.appspot.com/signin_fb.do", "UTF-8") + "&client_secret=myfacebookappsecret&code=" + code;
URL u = new URL(g);
URLConnection c = u.openConnection();
BufferedReader in = new BufferedReader(new InputStreamReader(c.getInputStream()));
String inputLine;
StringBuffer b = new StringBuffer();
while ((inputLine = in.readLine()) != null)
b.append(inputLine + "\n");
in.close();
token = b.toString();
if (token.startsWith("{"))
throw new Exception("error on requesting token: " + token + " with code: " + code);
} catch (Exception e) {
// an error occurred, handle this
}
String graph = null;
try {
String g = "https://graph.facebook.com/me?" + token;
URL u = new URL(g);
URLConnection c = u.openConnection();
BufferedReader in = new BufferedReader(new InputStreamReader(c.getInputStream()));
String inputLine;
StringBuffer b = new StringBuffer();
while ((inputLine = in.readLine()) != null)
b.append(inputLine + "\n");
in.close();
graph = b.toString();
} catch (Exception e) {
// an error occurred, handle this
}
String facebookId;
String firstName;
String middleNames;
String lastName;
String email;
Gender gender;
try {
JSONObject json = new JSONObject(graph);
facebookId = json.getString("id");
firstName = json.getString("first_name");
if (json.has("middle_name"))
middleNames = json.getString("middle_name");
else
middleNames = null;
if (middleNames != null && middleNames.equals(""))
middleNames = null;
lastName = json.getString("last_name");
email = json.getString("email");
if (json.has("gender")) {
String g = json.getString("gender");
if (g.equalsIgnoreCase("female"))
gender = Gender.FEMALE;
else if (g.equalsIgnoreCase("male"))
gender = Gender.MALE;
else
gender = Gender.UNKNOWN;
} else {
gender = Gender.UNKNOWN;
}
} catch (JSONException e) {
// an error occurred, handle this
}
...
I have removed error handling code, as you may want to handle it differently than I do. (Also, "Gender" is of course a class that I have defined.) At this point, you can use the data for whatever you want, like registering a new user or look for an existing user to log in. Note that the "myfacebookappsecret" string should of course be your app secret from Facebook.
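(If it helps, a minimal stand-in for that class could be a plain enum; the original is not shown in the answer.)
```java
public enum Gender { FEMALE, MALE, UNKNOWN }
```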
You will need the "org.json" package to use this code, which you can find at: http://json.org/java/ (just take the .java files and add them to your code in an org/json folder structure).
I hope this helps. If anything is unclear, please do comment, and I will update the answer.
Ex animo (sincerely), - Alexander.
****UPDATE****
I want to add a few tidbits of information, my apologies if some of this seems a bit excessive.
To be able to log in a user by his/her Facebook account, you need to know which user in the datastore we are talking about. If it's a new user, it's easy: create a new user object (with a field called "facebookId", or whatever you want to call it, whose value you get from Facebook), persist it in the datastore and log the user in.
If the user already exists, you need to have the field with the facebookId. When the user is redirected back from Facebook, you can grab the facebookId and look in the datastore to find the user you want to log in.
If you already have users, you will need to let them log in the way you usually do, so you know who they are, then send them to Facebook, get the facebookId back and update their user object. This way, they can log in using Facebook the next time.
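On App Engine's low-level datastore API, that lookup can be sketched as follows (the entity kind "User" and the property name "facebookId" are assumptions matching the description above):
```java
import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.Query;
import com.google.appengine.api.datastore.Query.FilterOperator;
import com.google.appengine.api.datastore.Query.FilterPredicate;

public class UserLookup {
    // Returns the stored user with this facebookId, or null if none exists yet.
    static Entity findByFacebookId(String facebookId) {
        DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
        Query q = new Query("User").setFilter(
                new FilterPredicate("facebookId", FilterOperator.EQUAL, facebookId));
        return ds.prepare(q).asSingleEntity();
    }
}
```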
Another small note: the user will be presented with a screen on Facebook asking to allow your app access to whatever scopes you ask for; there is no way around this (the fewer scopes you ask for, the less intrusive it seems, though). However, this only happens the first time a user is redirected (unless you ask for more scopes later; then it will ask again).
You can try face4j https://github.com/nischal/face4j/wiki . We've used it on our product http://grabinbox.com and have open sourced it for anyone to use. It works well on GAE.
There is an example on the wiki which should help you integrate login with facebook in a few minutes.
face4j makes use of OAuth 2.0 and the Facebook Graph API.
I had a lot of difficulty when trying to implement the OAuth signing myself. I spent a lot of time trying to debug an issue with my tokens not actually getting authorized - a common problem, apparently. Unfortunately, none of the solutions worked for me, so I ended up just using Scribe, a nifty Java OAuth library that has the added benefit of supporting other providers besides Facebook (e.g. Google, Twitter, etc.).
You can take a look at LeanEngine, the server part: https://github.com/leanengine/LeanEngine-Server/tree/master/lean-server-lib/src/main/java/com/leanengine/server/auth
Check Facebook's Java APIs.
Other examples: http://code.google.com/p/facebook-java-api/wiki/Examples