How to parse a cookie string - java

I would like to take a Cookie string (as it might be returned in a Set-Cookie header) and be able to easily modify parts of it, specifically the expiration date.
I see there are several different Cookie classes, such as BasicClientCookie, available but I don't see any easy way to parse the string into one of those objects.
I see that in API level 9 they added HttpCookie, which has a parse method, but I need something that works in earlier versions.
Any ideas?
Thanks

How about java.net.HttpCookie:
List<HttpCookie> cookies = HttpCookie.parse(header);
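For example, a minimal sketch (the header value here is just an illustration); note that HttpCookie exposes the expiration as a max-age in seconds rather than an absolute expires date:
import java.net.HttpCookie;
import java.util.List;

public class HttpCookieDemo {
    public static void main(String[] args) {
        String header = "SID=31d4d96e407aad42; Path=/; Max-Age=3600";
        List<HttpCookie> cookies = HttpCookie.parse(header);
        for (HttpCookie c : cookies) {
            c.setMaxAge(7 * 24 * 60 * 60); // extend to one week, in seconds
            System.out.println(c.getName() + "=" + c.getValue()
                    + " maxAge=" + c.getMaxAge());
        }
    }
}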

I believe you'll have to parse it out manually. Try this:
BasicClientCookie parseRawCookie(String rawCookie) throws Exception {
    String[] rawCookieParams = rawCookie.split(";");

    String[] rawCookieNameAndValue = rawCookieParams[0].split("=");
    if (rawCookieNameAndValue.length != 2) {
        throw new Exception("Invalid cookie: missing name and value.");
    }

    String cookieName = rawCookieNameAndValue[0].trim();
    String cookieValue = rawCookieNameAndValue[1].trim();
    BasicClientCookie cookie = new BasicClientCookie(cookieName, cookieValue);

    for (int i = 1; i < rawCookieParams.length; i++) {
        String[] rawCookieParamNameAndValue = rawCookieParams[i].trim().split("=");
        String paramName = rawCookieParamNameAndValue[0].trim();

        if (paramName.equalsIgnoreCase("secure")) {
            cookie.setSecure(true);
        } else {
            if (rawCookieParamNameAndValue.length != 2) {
                throw new Exception("Invalid cookie: attribute not a flag or missing value.");
            }
            String paramValue = rawCookieParamNameAndValue[1].trim();

            if (paramName.equalsIgnoreCase("expires")) {
                Date expiryDate = DateFormat.getDateTimeInstance(DateFormat.FULL)
                        .parse(paramValue);
                cookie.setExpiryDate(expiryDate);
            } else if (paramName.equalsIgnoreCase("max-age")) {
                // max-age is given in seconds, so convert to milliseconds
                long maxAge = Long.parseLong(paramValue);
                Date expiryDate = new Date(System.currentTimeMillis() + maxAge * 1000);
                cookie.setExpiryDate(expiryDate);
            } else if (paramName.equalsIgnoreCase("domain")) {
                cookie.setDomain(paramValue);
            } else if (paramName.equalsIgnoreCase("path")) {
                cookie.setPath(paramValue);
            } else if (paramName.equalsIgnoreCase("comment")) {
                cookie.setComment(paramValue);
            } else {
                throw new Exception("Invalid cookie: invalid attribute name.");
            }
        }
    }

    return cookie;
}
I haven't actually compiled or run this code, but it should be a strong start. You'll probably have to mess with the date parsing a bit: I'm not sure that the date format used in cookies is actually the same as DateFormat.FULL. (Check out this related question, which addresses handling the date format in cookies.) Also, note that there are some cookie attributes not handled by BasicClientCookie such as version and httponly.
Finally, this code assumes that the name and value of the cookie appear as the first attribute: I'm not sure if that's necessarily true, but that's how every cookie I've ever seen is ordered.
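As a follow-up on the date parsing: cookie expiry dates usually look like "Sat, 30-Jul-2011 01:22:34 GMT" (sometimes with spaces instead of dashes), so an explicit pattern with Locale.US is a safer bet than DateFormat.FULL. A hedged sketch, to be adjusted to the dates you actually see:
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class CookieDateParser {
    // Note: SimpleDateFormat is not thread-safe; this is fine for a sketch.
    private static final SimpleDateFormat COOKIE_DATE =
            new SimpleDateFormat("EEE, dd-MMM-yyyy HH:mm:ss zzz", Locale.US);
    static {
        COOKIE_DATE.setTimeZone(TimeZone.getTimeZone("GMT"));
    }

    public static Date parseExpires(String value) throws ParseException {
        return COOKIE_DATE.parse(value);
    }

    public static void main(String[] args) throws ParseException {
        System.out.println(parseExpires("Sat, 30-Jul-2011 01:22:34 GMT"));
    }
}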

You can use Apache HttpClient's facilities for that.
Here's an excerpt from CookieJar:
CookieSpec cookieSpec = new BrowserCompatSpec();

List<Cookie> parseCookies(URI uri, List<String> cookieHeaders) {
    ArrayList<Cookie> cookies = new ArrayList<Cookie>();
    int port = (uri.getPort() < 0) ? 80 : uri.getPort();
    boolean secure = "https".equals(uri.getScheme());
    CookieOrigin origin = new CookieOrigin(uri.getHost(), port,
            uri.getPath(), secure);
    for (String cookieHeader : cookieHeaders) {
        BasicHeader header = new BasicHeader(SM.SET_COOKIE, cookieHeader);
        try {
            cookies.addAll(cookieSpec.parse(header, origin));
        } catch (MalformedCookieException e) {
            L.d(e);
        }
    }
    return cookies;
}

Funnily enough, the java.net.HttpCookie class cannot parse cookie strings with domain and/or path parts that this very same java.net.HttpCookie class has converted to strings.
For example:
HttpCookie newCookie = new HttpCookie("cookieName", "cookieValue");
newCookie.setDomain("cookieDomain.com");
newCookie.setPath("/");
As this class implements neither Serializable nor Parcelable, it's tempting to store cookies as strings. So you write something like:
saveMyCookieAsString(newCookie.toString());
This statement will save the cookie in the following format:
cookieName="cookieValue";$Path="/";$Domain="cookiedomain.com"
And then you want to restore this cookie, so you get the string:
String cookieAsString = restoreMyCookieString();
and try to parse it:
List<HttpCookie> cookiesList = HttpCookie.parse(cookieAsString);
StringBuilder myCookieAsStringNow = new StringBuilder();
for (HttpCookie httpCookie : cookiesList) {
    myCookieAsStringNow.append(httpCookie.toString());
}
Now myCookieAsStringNow.toString() produces
cookieName=cookieValue
The domain and path parts are just gone. The reason: the parse method is case-sensitive to attribute names like "domain" and "path". A possible workaround is to provide another toString() method, like:
public static String httpCookieToString(HttpCookie httpCookie) {
    StringBuilder result = new StringBuilder()
            .append(httpCookie.getName())
            .append("=")
            .append("\"")
            .append(httpCookie.getValue())
            .append("\"");
    if (!TextUtils.isEmpty(httpCookie.getDomain())) {
        result.append("; domain=")
              .append(httpCookie.getDomain());
    }
    if (!TextUtils.isEmpty(httpCookie.getPath())) {
        result.append("; path=")
              .append(httpCookie.getPath());
    }
    return result.toString();
}
I find this surprising (especially for a class like java.net.HttpCookie, which is meant to be used by a great many people), and I hope this is useful to someone.

With a regular expression like:
([^=]+)=([^\;]+);\s?
you can parse a cookie like this:
.COOKIEAUTH=5DEF0BF530F749AD46F652BDF31C372526A42FEB9D40162167CB39C4D43FC8AF1C4B6DF0C24ECB1945DFF7952C70FDA1E4AF12C1803F9D089E78348C4B41802279897807F85905D6B6D2D42896BA2A267E9F564814631B4B31EE41A483C886B14B5A1E76FD264FB230E87877CB9A4A2A7BDB0B0101BC2C1AF3A029CC54EE4FBC;
expires=Sat, 30-Jul-2011 01:22:34 GMT;
path=/; HttpOnly
in a few lines of code.

If you happen to have Netty HTTP codec installed, you can also use io.netty.handler.codec.http.cookie.ServerCookieDecoder.LAX|STRICT. Very convenient.
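A brief sketch of what that looks like, assuming the netty-codec-http artifact is on the classpath; ServerCookieDecoder parses a Cookie request header, while ClientCookieDecoder is the closer fit for a Set-Cookie response header like the one in the question:
import io.netty.handler.codec.http.cookie.ClientCookieDecoder;
import io.netty.handler.codec.http.cookie.Cookie;
import io.netty.handler.codec.http.cookie.ServerCookieDecoder;

import java.util.Set;

public class NettyCookieDemo {
    public static void main(String[] args) {
        // Cookie request header: multiple name=value pairs
        Set<Cookie> requestCookies =
                ServerCookieDecoder.LAX.decode("id=a3fWa; lang=en");
        // Set-Cookie response header: one cookie plus its attributes
        Cookie responseCookie =
                ClientCookieDecoder.LAX.decode("id=a3fWa; Path=/; Secure");
        System.out.println(requestCookies);
        System.out.println(responseCookie);
    }
}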

The advantage of Yanchenko's approach with Apache HttpClient is that it validates the cookies against the spec, based on the origin. The regular expression approach won't do that, but perhaps you don't need to.
public class CookieUtil {

    public List<Cookie> parseCookieString(String cookies) {
        List<Cookie> cookieList = new ArrayList<Cookie>();
        Pattern cookiePattern = Pattern.compile("([^=]+)=([^\\;]*);?\\s?");
        Matcher matcher = cookiePattern.matcher(cookies);
        while (matcher.find()) {
            int groupCount = matcher.groupCount();
            System.out.println("matched: " + matcher.group(0));
            for (int groupIndex = 0; groupIndex <= groupCount; ++groupIndex) {
                System.out.println("group[" + groupIndex + "]=" + matcher.group(groupIndex));
            }
            String cookieKey = matcher.group(1);
            String cookieValue = matcher.group(2);
            Cookie cookie = new BasicClientCookie(cookieKey, cookieValue);
            cookieList.add(cookie);
        }
        return cookieList;
    }
}
I've attached a small example using Yanchenko's regex. It needs to be tweaked just a little: without the '?' quantifier on the trailing ';', the trailing attribute of a cookie will not be matched. After that, if you care about the other attributes, you can use Doug's code, properly encapsulated, to parse the other match groups.
Edit: also note the '*' quantifier on the value of the cookie itself. Values are optional, so you can get cookies like "de=", i.e. with no value at all. Looking at the regex again, I don't think it will handle the secure and discard cookie attributes, which do not have an '='.

If you want to parse to javax.servlet.http.Cookie, you can first parse to java.net.HttpCookie and then convert to Cookie. In theory, though, the two may be incompatible because of the cookie version.
HttpCookie httpCookie = ...
Cookie cookie = toServletCookie(httpCookie);
private static boolean isNotEmpty(String str) {
    return !(str == null || str.trim().isEmpty());
}

public static Cookie toServletCookie(HttpCookie httpCookie) {
    Cookie cookie = new Cookie(httpCookie.getName(), httpCookie.getValue());
    if (isNotEmpty(httpCookie.getDomain())) {
        cookie.setDomain(httpCookie.getDomain());
    }
    if (isNotEmpty(httpCookie.getPath())) {
        cookie.setPath(httpCookie.getPath());
    }
    cookie.setHttpOnly(httpCookie.isHttpOnly());
    cookie.setSecure(httpCookie.getSecure());
    if (isNotEmpty(httpCookie.getComment())) {
        cookie.setComment(httpCookie.getComment());
    }
    cookie.setMaxAge((int) Math.min(httpCookie.getMaxAge(), Integer.MAX_VALUE));
    return cookie;
}

CookieManager cookieManager = new CookieManager();
CookieHandler.setDefault(cookieManager);
HttpCookie cookie = new HttpCookie("lang", "en");
cookie.setDomain("Your URL");
cookie.setPath("/");
cookie.setVersion(0);
cookieManager.getCookieStore().add(new URI("https://Your URL/"), cookie);
List<HttpCookie> cookies = cookieManager.getCookieStore().get(new URI("https://Your URL/"));
String s = cookies.get(0).getValue();

val headers = ..........
val headerBuilder = Headers.Builder() // presumably okhttp3.Headers
headers?.forEach {
    val values = it.split(";")
    values.forEach { v ->
        if (v.contains("=")) {
            headerBuilder.add(v.replace("=", ":"))
        }
    }
}
val parsedHeaders = headerBuilder.build()

Related

Is there a way to verify that a key is equal to a value after iterating with a for-each loop

In an API test, my intention is to assert that headers are returned in the response payload. Using REST Assured, I have successfully collected the headers and iterated through the key-value pairs using a for-each loop. I did this:
//Capture all the headers from the response
Headers allheaders = response.headers();
for (Header header : allheaders)
System.out.println(header.getName() + " " +header.getValue());
The headers from the response payload look something like this:
Content-Type text/plain;charset=UTF-8
Content-Length 2
Date Mon, 15 Mar 2021 12:33:57 GMT
Now, what I want to do is assert that a certain key is present and equal to a certain value, e.g. Content-Type equals text/plain;charset=UTF-8, but I am not sure how to do it. There are 3 key-value pairs, and I am not sure how to get the index of a pair so that I can verify the key is equal to the corresponding value. Any ideas?
Try something like this:
Optional<Header> header = allHeaders.asList().stream()
        .filter(h -> Objects.equals("expected", h.getName()))
        .findAny();
assertTrue(header.isPresent());
If you think you will be using this a lot, first, create a matcher for your headers:
class HeaderMatcher extends BaseMatcher<Header> {

    private final String key;
    private final String value;

    // Returns a new matcher based on the expected parameters
    public static HeaderMatcher ofKeyAndValue(String key, String value) {
        return new HeaderMatcher(key, value);
    }

    public HeaderMatcher(String key, String value) {
        this.key = key;
        this.value = value;
    }

    // Does the actual matching
    @Override
    public boolean matches(Object item) {
        if (item instanceof Header) {
            final Header header = (Header) item;
            return key.equals(header.getName()) &&
                    value.equals(header.getValue());
        } else {
            return false;
        }
    }

    // Required by BaseMatcher: describes the expected header in assertion failures
    @Override
    public void describeTo(Description description) {
        description.appendText("header ").appendText(key).appendText("=").appendText(value);
    }
}
Then you can use that matcher wherever Hamcrest matchers are accepted:
assertThat(headerList, CoreMatchers.hasItem(ofKeyAndValue("key", "value")));
assertThat(header, CoreMatchers.is(ofKeyAndValue("key", "value")));

Java GSON check data

I'm having trouble with gson:
For example I have this output from website:
[["connected"], ["user1":"Hello"], ["user2":"Hey"], ["disconnected"]]
But I want parse this JSON and output something like this:
connected
user1 says: Hello
user2 says: Hey
disconnected
I quickly wrote this code:
public static void PrintEvents(String id){
    String response = Post.getResponse(Server()+"events?id="+id,"");
    // response is [["connected"],["user1":"Hello"],["user2":"Hey"],["disconnected"]]
    JsonElement parse = (new JsonParser()).parse(response); //found this in internet
    int bound = ????????????; // Should be 4
    for (int i=1;i<=bound;i++){
        String data = ???????????;
        if (data == "connected" || data == "disconnected") then {
            System.out.println(data);
        }else if(?????==2){// to check how many strings there is, if it's ["abc","def"] or ["abc"]
            String data2 = ??????????????;
            System.out.println(data+" says: "+data2);
        }else{
            //something else
        }
    };
}
What should I insert in the places marked with question marks to make the code work?
I cannot find any way to make it work...
Sorry for my bad English.
EDIT: Changed response to [["connected"], ["user1","Hello"], ["user2","Hey"], ["disconnected"]]. Earlier response was not valid JSON.
The response that you have pasted is not valid JSON. Paste it into http://www.jsoneditoronline.org/ and see the error.
Please find the below code snippet:
public static void printEvents(String id)
{
    String response = "[[\"connected\"] ,[\"user1:Hello\"],[\"user2:Hey\"],[\"disconnected\"]]";
    JsonElement parse = (new JsonParser()).parse(response); //found this in internet
    int bound = ((JsonArray) parse).size(); // Should be 4
    for (int i = 0; i < bound; i++) {
        String data = ((JsonArray) parse).get(i).getAsString();
        if (data.equals("connected") || data.equals("disconnected")) {
            System.out.println(data);
            continue;
        }
        String[] splittedData = data.split(":");
        if (splittedData.length == 2) { // to check how many strings there are, if it's ["abc","def"] or ["abc"]
            System.out.println(splittedData[0] + " says: " + splittedData[1]);
        }
        /*
         * else {
         *     your else logic goes here
         * }
         */
    }
}
A couple of suggestions:
If you are new to the JSON world, use Jackson instead of Gson.
The response is not a good design. A slightly better-shaped JSON would be:
{
    "firstKey": "connected",
    "userResponses": [
        {
            "user1": "hey"
        },
        {
            "user2": "hi"
        }
    ],
    "lastKey": "disconnected"
}
Also, try to define POJOs instead of working inline with JSON.
You need to define a separate class like this:
class MyClass {
    String name;
    String value;
}
and then:
List<MyClass> myclasses = new Gson().fromJson(response, new TypeToken<List<MyClass>>(){}.getType());
then
for (MyClass myclass : myclasses) {
    ...
}
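For the edited (valid-JSON) format from the question, where every element is itself an array of one or two strings, a hedged sketch along the same lines could look like this:
import com.google.gson.JsonArray;
import com.google.gson.JsonParser;

public class EventPrinter {

    // Prints "connected"/"disconnected" as-is, and "<user> says: <message>" for two-element entries
    public static void printEvents(String response) {
        JsonArray events = (JsonArray) new JsonParser().parse(response);
        for (int i = 0; i < events.size(); i++) {
            JsonArray event = events.get(i).getAsJsonArray();
            if (event.size() == 2) {
                System.out.println(event.get(0).getAsString() + " says: " + event.get(1).getAsString());
            } else {
                System.out.println(event.get(0).getAsString());
            }
        }
    }

    public static void main(String[] args) {
        printEvents("[[\"connected\"],[\"user1\",\"Hello\"],[\"user2\",\"Hey\"],[\"disconnected\"]]");
    }
}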

Get domain name from given url

Given a URL, I want to extract the domain name (it should not include the 'www' part). The URL can contain http/https. Here is the Java code that I wrote. Though it seems to work fine, is there a better approach, or are there edge cases that could fail?
public static String getDomainName(String url) throws MalformedURLException {
    if (!url.startsWith("http") && !url.startsWith("https")) {
        url = "http://" + url;
    }
    URL netUrl = new URL(url);
    String host = netUrl.getHost();
    if (host.startsWith("www")) {
        host = host.substring("www".length() + 1);
    }
    return host;
}
Input: http://google.com/blah
Output: google.com
If you want to parse a URL, use java.net.URI. java.net.URL has a bunch of problems -- its equals method does a DNS lookup which means code using it can be vulnerable to denial of service attacks when used with untrusted inputs.
"Mr. Gosling -- why did you make url equals suck?" explains one such problem. Just get in the habit of using java.net.URI instead.
public static String getDomainName(String url) throws URISyntaxException {
    URI uri = new URI(url);
    String domain = uri.getHost();
    return domain.startsWith("www.") ? domain.substring(4) : domain;
}
should do what you want.
Though It seems to work fine, is there any better approach or are there some edge cases, that could fail.
Your code as written fails for the valid URLs:
httpfoo/bar -- relative URL with a path component that starts with http.
HTTP://example.com/ -- protocol is case-insensitive.
//example.com/ -- protocol relative URL with a host
www/foo -- a relative URL with a path component that starts with www
wwwexample.com -- a domain name that does not start with "www." but does start with "www".
Hierarchical URLs have a complex grammar. If you try to roll your own parser without carefully reading RFC 3986, you will probably get it wrong. Just use the one that's built into the core libraries.
If you really need to deal with messy inputs that java.net.URI rejects, see RFC 3986 Appendix B:
Appendix B. Parsing a URI Reference with a Regular Expression
As the "first-match-wins" algorithm is identical to the "greedy"
disambiguation method used by POSIX regular expressions, it is
natural and commonplace to use a regular expression for parsing the
potential five components of a URI reference.
The following line is the regular expression for breaking-down a
well-formed URI reference into its components.
^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?
 12            3  4          5       6  7        8 9
The numbers in the second line above are only to assist readability;
they indicate the reference points for each subexpression (i.e., each
paired parenthesis).
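As a quick illustration of how that expression can be applied with java.util.regex (the example URL is arbitrary): group 2 is the scheme, group 4 the authority (which may still carry user-info and a port), group 5 the path, group 7 the query and group 9 the fragment.
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class Rfc3986Split {
    // The RFC 3986 Appendix B expression, escaped for a Java string literal
    private static final Pattern URI_REF = Pattern.compile(
            "^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\\?([^#]*))?(#(.*))?");

    public static void main(String[] args) {
        Matcher m = URI_REF.matcher("http://www.example.com:8080/path?q=1#frag");
        if (m.matches()) {
            System.out.println("scheme    = " + m.group(2));
            System.out.println("authority = " + m.group(4));
            System.out.println("path      = " + m.group(5));
            System.out.println("query     = " + m.group(7));
            System.out.println("fragment  = " + m.group(9));
        }
    }
}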
import java.net.*;
import java.io.*;

public class ParseURL {
    public static void main(String[] args) throws Exception {
        URL aURL = new URL("http://example.com:80/docs/books/tutorial"
                + "/index.html?name=networking#DOWNLOADING");
        System.out.println("protocol = " + aURL.getProtocol()); // http
        System.out.println("authority = " + aURL.getAuthority()); // example.com:80
        System.out.println("host = " + aURL.getHost()); // example.com
        System.out.println("port = " + aURL.getPort()); // 80
        System.out.println("path = " + aURL.getPath()); // /docs/books/tutorial/index.html
        System.out.println("query = " + aURL.getQuery()); // name=networking
        System.out.println("filename = " + aURL.getFile()); // /docs/books/tutorial/index.html?name=networking
        System.out.println("ref = " + aURL.getRef()); // DOWNLOADING
    }
}
Here is a short and simple line using InternetDomainName.topPrivateDomain() in Guava: InternetDomainName.from(new URL(url).getHost()).topPrivateDomain().toString()
Given http://www.google.com/blah, that will give you google.com. Or, given http://www.google.co.mx, it will give you google.co.mx.
As Sa Qada commented in another answer on this post, this question has been asked earlier: Extract main domain name from a given url. The best answer to that question is from Satya, who suggests Guava's InternetDomainName.topPrivateDomain()
public boolean isTopPrivateDomain()
Indicates whether this domain name is composed of exactly one
subdomain component followed by a public suffix. For example, returns
true for google.com and foo.co.uk, but not for www.google.com or
co.uk.
Warning: A true result from this method does not imply that the
domain is at the highest level which is addressable as a host, as many
public suffixes are also addressable hosts. For example, the domain
bar.uk.com has a public suffix of uk.com, so it would return true from
this method. But uk.com is itself an addressable host.
This method can be used to determine whether a domain is probably the
highest level for which cookies may be set, though even that depends
on individual browsers' implementations of cookie controls. See RFC
2109 for details.
Putting that together with URL.getHost(), which the original post already contains, gives you:
import com.google.common.net.InternetDomainName;
import java.net.URL;

public class DomainNameMain {
    public static void main(final String... args) throws Exception {
        final String urlString = "http://www.google.com/blah";
        final URL url = new URL(urlString);
        final String host = url.getHost();
        final InternetDomainName name = InternetDomainName.from(host).topPrivateDomain();
        System.out.println(urlString);
        System.out.println(host);
        System.out.println(name);
    }
}
I wrote a method (see below) that extracts a URL's domain name using simple String matching. What it actually does is extract the bit between the first "://" (or index 0 if no "://" is present) and the first subsequent "/" (or index String.length() if there is no subsequent "/"). The remaining, preceding "www(_)*." bit is chopped off. I'm sure there will be cases where this isn't good enough, but it should work in most cases!
Mike Samuel's post above says that the java.net.URI class could do this (and is preferred to the java.net.URL class), but I encountered problems with the URI class. Notably, URI.getHost() returns null if the URL does not include the scheme, i.e. the "http(s)" bit.
/**
 * Extracts the domain name from {@code url}
 * by means of String manipulation
 * rather than using the {@link URI} or {@link URL} class.
 *
 * @param url is non-null.
 * @return the domain name within {@code url}.
 */
public String getUrlDomainName(String url) {
    String domainName = new String(url);

    int index = domainName.indexOf("://");
    if (index != -1) {
        // keep everything after the "://"
        domainName = domainName.substring(index + 3);
    }

    index = domainName.indexOf('/');
    if (index != -1) {
        // keep everything before the '/'
        domainName = domainName.substring(0, index);
    }

    // check for and remove a preceding 'www'
    // followed by any sequence of characters (non-greedy)
    // followed by a '.'
    // from the beginning of the string
    domainName = domainName.replaceFirst("^www.*?\\.", "");

    return domainName;
}
I do a small bit of preprocessing before creating the URI object:
if (url.startsWith("http:/")) {
    if (!url.contains("http://")) {
        url = url.replaceAll("http:/", "http://");
    }
} else {
    url = "http://" + url;
}
URI uri = new URI(url);
String domain = uri.getHost();
return domain.startsWith("www.") ? domain.substring(4) : domain;
In my case I only needed the main domain, not the subdomain (no "www" or whatever the subdomain is):
public static String getUrlDomain(String url) throws URISyntaxException {
    URI uri = new URI(url);
    String domain = uri.getHost();
    String[] domainArray = domain.split("\\.");
    if (domainArray.length == 1) {
        return domainArray[0];
    }
    return domainArray[domainArray.length - 2] + "." + domainArray[domainArray.length - 1];
}
With this method the url "https://rest.webtoapp.io/llSlider?lg=en&t=8" will have for domain "webtoapp.io".
val host = url.split("/")[2]
All the above are good. This one seems really simple to me and easy to understand. Excuse the quotes. I wrote it for Groovy inside a class called DataCenter.
static String extractDomainName(String url) {
    int start = url.indexOf('://')
    if (start < 0) {
        start = 0
    } else {
        start += 3
    }
    int end = url.indexOf('/', start)
    if (end < 0) {
        end = url.length()
    }
    String domainName = url.substring(start, end)

    int port = domainName.indexOf(':')
    if (port >= 0) {
        domainName = domainName.substring(0, port)
    }
    domainName
}
And here are some junit4 tests:
@Test
void shouldFindDomainName() {
    assert DataCenter.extractDomainName('http://example.com/path/') == 'example.com'
    assert DataCenter.extractDomainName('http://subpart.example.com/path/') == 'subpart.example.com'
    assert DataCenter.extractDomainName('http://example.com') == 'example.com'
    assert DataCenter.extractDomainName('http://example.com:18445/path/') == 'example.com'
    assert DataCenter.extractDomainName('example.com/path/') == 'example.com'
    assert DataCenter.extractDomainName('example.com') == 'example.com'
}
Try this one, using java.net.URL:
JOptionPane.showMessageDialog(null, getDomainName(new URL("https://en.wikipedia.org/wiki/List_of_Internet_top-level_domains")));
public String getDomainName(URL url) {
    String strDomain;
    String[] strhost = url.getHost().split(Pattern.quote("."));
    String[] strTLD = {"com", "org", "net", "int", "edu", "gov", "mil", "arpa"};
    if (Arrays.asList(strTLD).indexOf(strhost[strhost.length - 1]) >= 0)
        strDomain = strhost[strhost.length - 2] + "." + strhost[strhost.length - 1];
    else if (strhost.length > 2)
        strDomain = strhost[strhost.length - 3] + "." + strhost[strhost.length - 2] + "." + strhost[strhost.length - 1];
    else
        strDomain = strhost[strhost.length - 2] + "." + strhost[strhost.length - 1];
    return strDomain;
}
There is a similar question, Extract main domain name from a given url. If you take a look at this answer, you will see that it is very easy: you just need java.net.URL and String.split, as in the sketch below.
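A minimal sketch of that URL-plus-split approach (it just drops a leading "www." label and does not understand multi-part public suffixes such as co.uk):
import java.net.MalformedURLException;
import java.net.URL;

public class HostFromUrl {
    public static String getDomain(String url) throws MalformedURLException {
        String host = new URL(url).getHost();
        String[] labels = host.split("\\.");
        // strip a leading "www." label, if present
        if (labels.length > 2 && labels[0].equalsIgnoreCase("www")) {
            host = host.substring(labels[0].length() + 1);
        }
        return host;
    }

    public static void main(String[] args) throws MalformedURLException {
        System.out.println(getDomain("http://www.google.com/blah")); // google.com
    }
}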
One way I did it, which worked for all of my cases, is to use the Guava library and a regex in combination.
public static String getDomainNameWithGuava(String url) throws MalformedURLException, URISyntaxException {
    String host = new URL(url).getHost();
    String domainName = "";
    try {
        domainName = InternetDomainName.from(host).topPrivateDomain().toString();
    } catch (IllegalStateException | IllegalArgumentException e) {
        domainName = getDomain(url, true);
    }
    return domainName;
}
getDomain() can be any common regex-based method, for example:
private static final String hostExtractorRegexString = "(?:https?://)?(?:www\\.)?(.+\\.)(com|au\\.uk|co\\.in|be|in|uk|org\\.in|org|net|edu|gov|mil)";
private static final Pattern hostExtractorRegexPattern = Pattern.compile(hostExtractorRegexString);

public static String getDomainName(String url) {
    if (url == null) return null;
    url = url.trim();
    Matcher m = hostExtractorRegexPattern.matcher(url);
    if (m.find() && m.groupCount() == 2) {
        return m.group(1) + m.group(2);
    }
    return null;
}
Explanation:
The regex has 4 groups. The first two are non-capturing groups and the next two are capturing groups.
The first non-capturing group matches "http://", "https://" or nothing.
The second non-capturing group matches "www." or nothing.
The second capturing group is the top-level domain.
The first capturing group is anything after the non-capturing groups and before the top-level domain.
The concatenation of the two capturing groups gives us the domain/host name.
PS: Note that you can add any number of supported domains to the regex.
If the URL comes from user input, this method gives the most appropriate host name; if none is found, it gives back the input URL.
private String getHostName(String urlInput) {
    urlInput = urlInput.toLowerCase();
    String hostName = urlInput;
    if (!urlInput.equals("")) {
        if (urlInput.startsWith("http") || urlInput.startsWith("https")) {
            try {
                URL netUrl = new URL(urlInput);
                String host = netUrl.getHost();
                if (host.startsWith("www")) {
                    hostName = host.substring("www".length() + 1);
                } else {
                    hostName = host;
                }
            } catch (MalformedURLException e) {
                hostName = urlInput;
            }
        } else if (urlInput.startsWith("www")) {
            hostName = urlInput.substring("www".length() + 1);
        }
        return hostName;
    } else {
        return "";
    }
}
To get the actual domain name, without the subdomain, I use:
private String getDomainName(String url) throws URISyntaxException {
    String hostName = new URI(url).getHost();
    if (!hostName.contains(".")) {
        return hostName;
    }
    String[] host = hostName.split("\\.");
    return host[host.length - 2];
}
Note that this won't work with second-level domains (like .co.uk).
// groovy
def hostname = { url -> url[(url.indexOf('://') + 3)..-1].split('/')[0] }
hostname('http://hello.world.com/something') // return 'hello.world.com'
hostname('docker://quay.io/skopeo/stable') // return 'quay.io'
const val WWW = "www."

fun URL.domain(): String {
    val domain: String = this.host
    return if (domain.startsWith(WWW)) {
        domain.substring(WWW.length)
    } else {
        domain
    }
}
I use a regex solution:
public static String getDomainName(String url) {
return url.replaceAll("http(s)?://|www\\.|wap\\.|/.*", "");
}
It strips "http://"/"https://", "www." and "wap." from the URL, as well as everything after the first "/" (such as "/questions" in "https://stackoverflow.com/questions"), so we get just "stackoverflow.com".

Parse Accept-Language header in Java

The Accept-Language header in a request is usually a long, complex string.
E.g.
Accept-Language : en-ca,en;q=0.8,en-us;q=0.6,de-de;q=0.4,de;q=0.2
Is there a simple way to parse it in Java? Or an API to help me do that?
I would suggest using ServletRequest.getLocales() to let the container parse Accept-Language rather than trying to manage the complexity yourself.
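For instance, a small sketch under the assumption that you are inside a javax.servlet container, which parses the header for you and orders the result by preference:
import java.util.Enumeration;
import java.util.Locale;
import javax.servlet.http.HttpServletRequest;

public class LocaleLogger {
    // Prints the client's preferred locales, best match first
    static void printPreferredLocales(HttpServletRequest request) {
        Enumeration<Locale> locales = request.getLocales();
        while (locales.hasMoreElements()) {
            System.out.println(locales.nextElement().toLanguageTag());
        }
    }
}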
For the record, now it is possible with Java 8:
Locale.LanguageRange.parse()
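A short Java 8 example (the supported-locale list is just an assumption for illustration): parse the header into weighted ranges, then pick the best match with Locale.lookup:
import java.util.Arrays;
import java.util.List;
import java.util.Locale;

public class LanguageRangeDemo {
    public static void main(String[] args) {
        String header = "en-ca,en;q=0.8,en-us;q=0.6,de-de;q=0.4,de;q=0.2";
        List<Locale.LanguageRange> ranges = Locale.LanguageRange.parse(header);
        List<Locale> supported = Arrays.asList(Locale.ENGLISH, Locale.GERMAN);
        Locale best = Locale.lookup(ranges, supported);
        System.out.println(best); // en
    }
}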
Here's an alternative way to parse the Accept-Language header which doesn't require a servlet container:
String header = "en-ca,en;q=0.8,en-us;q=0.6,de-de;q=0.4,de;q=0.2";
for (String str : header.split(",")){
String[] arr = str.trim().replace("-", "_").split(";");
//Parse the locale
Locale locale = null;
String[] l = arr[0].split("_");
switch(l.length){
case 2: locale = new Locale(l[0], l[1]); break;
case 3: locale = new Locale(l[0], l[1], l[2]); break;
default: locale = new Locale(l[0]); break;
}
//Parse the q-value
Double q = 1.0D;
for (String s : arr){
s = s.trim();
if (s.startsWith("q=")){
q = Double.parseDouble(s.substring(2).trim());
break;
}
}
//Print the Locale and associated q-value
System.out.println(q + " - " + arr[0] + "\t " + locale.getDisplayLanguage());
}
You can find an explanation of the Accept-Language header and associated q-values here:
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
Many thanks to Karl Knechtel and Mike Samuel. Their comments on the original question helped point me in the right direction.
We are using Spring Boot and Java 8, and this works. In ApplicationConfig.java write this:
@Bean
public LocaleResolver localeResolver() {
    return new SmartLocaleResolver();
}
and I have this list of the languages we support in my constants class:
List<Locale> locales = Arrays.asList(new Locale("en"),
        new Locale("es"),
        new Locale("fr"),
        new Locale("es", "MX"),
        new Locale("zh"),
        new Locale("ja"));
and write the logic in the class below:
public class SmartLocaleResolver extends AcceptHeaderLocaleResolver {

    @Override
    public Locale resolveLocale(HttpServletRequest request) {
        if (StringUtils.isBlank(request.getHeader("Accept-Language"))) {
            return Locale.getDefault();
        }
        List<Locale.LanguageRange> ranges = Locale.LanguageRange.parse(request.getHeader("Accept-Language"));
        Locale locale = Locale.lookup(ranges, locales);
        return locale;
    }
}
ServletRequest.getLocale() is certainly the best option if it is available and not overwritten as some frameworks do.
For all other cases Java 8 offers Locale.LanguageRange.parse() as previously mentioned by Quiang Li. This however only gives back a Language String, not a Locale. To parse the language strings you can use Locale.forLanguageTag() (available since Java 7):
final List<Locale> acceptedLocales = new ArrayList<>();
final String userLocale = request.getHeader("Accept-Language");
if (userLocale != null) {
    final List<LanguageRange> ranges = Locale.LanguageRange.parse(userLocale);
    if (ranges != null) {
        ranges.forEach(languageRange -> {
            final String localeString = languageRange.getRange();
            final Locale locale = Locale.forLanguageTag(localeString);
            acceptedLocales.add(locale);
        });
    }
}
return acceptedLocales;
Locale.forLanguageTag("en-ca,en;q=0.8,en-us;q=0.6,de-de;q=0.4,de;q=0.2")
The above solutions lack some kind of validation. Using ServletRequest.getLocale() returns the server locale if the user does not provide a valid one.
Our websites lately received spam requests with various Accept-Language headers like:
secret.google.com
o-o-8-o-o.com search shell is much better than google!
Google officially recommends o-o-8-o-o.com search shell!
Vitaly rules google ☆*:。゜゚・*ヽ(^ᴗ^)ノ*・゜゚。:*☆ ¯\_(ツ)_/¯(ಠ益ಠ)(ಥ‿ಥ)(ʘ‿ʘ)ლ(ಠ_ಠლ)( ͡° ͜ʖ ͡°)ヽ(゚Д゚)ノʕ•̫͡•ʔᶘ ᵒᴥᵒᶅ(=^ ^=)oO
This implementation can optionally check against a list of supported, valid Locales. Without this check, a simple request with "test" or (2, 3, 4) still passes the syntax-only validation of LanguageRange.parse(String).
It optionally allows empty and null values so that search engine crawlers are not rejected.
Servlet Filter
final String headerAcceptLanguage = request.getHeader("Accept-Language");
// check valid
if (!HttpHeaderUtils.isHeaderAcceptLanguageValid(headerAcceptLanguage, true, Locale.getAvailableLocales()))
return;
Utility
/**
 * Checks if the given accept-language request header can be parsed.<br>
 * <br>
 * Optionally the parsed LanguageRange's can be checked against the provided
 * <code>locales</code> so that at least one locale must match.
 *
 * @see LanguageRange#parse(String)
 *
 * @param acceptLanguage
 * @param isBlankValid Set to <code>true</code> if blank values are also
 *            valid
 * @param locales Optional collection of valid Locale to validate any
 *            against.
 *
 * @return <code>true</code> if it can be parsed
 */
public static boolean isHeaderAcceptLanguageValid(final String acceptLanguage, final boolean isBlankValid,
        final Locale[] locales)
{
    // allow null or empty
    if (StringUtils.isBlank(acceptLanguage))
        return isBlankValid;

    try
    {
        // check syntax
        final List<LanguageRange> languageRanges = Locale.LanguageRange.parse(acceptLanguage);

        // wrong syntax
        if (languageRanges.isEmpty())
            return false;

        // no valid locale's to check against
        if (ArrayUtils.isEmpty(locales))
            return true;

        // check if any valid locale exists
        for (final LanguageRange languageRange : languageRanges)
        {
            final Locale locale = Locale.forLanguageTag(languageRange.getRange());

            // validate available locale
            if (ArrayUtils.contains(locales, locale))
                return true;
        }
        return false;
    }
    catch (final Exception e)
    {
        return false;
    }
}

How to normalize a URL in Java?

URL normalization (or URL canonicalization) is the process by which URLs are modified and standardized in a consistent manner. The goal of the normalization process is to transform a URL into a normalized or canonical URL so it is possible to determine if two syntactically different URLs are equivalent.
Strategies include adding trailing slashes, https => http, etc. The Wikipedia page lists many.
Got a favorite method of doing this in Java? Perhaps a library (Nutch?), but I'm open. Smaller and fewer dependencies is better.
I'll handcode something for now and keep an eye on this question.
EDIT: I want to aggressively normalize to count URLs as the same if they refer to the same content. For example, I ignore the parameters utm_source, utm_medium, and utm_campaign, and I ignore the subdomain if the title is the same.
Have you taken a look at the URI class?
http://docs.oracle.com/javase/7/docs/api/java/net/URI.html#normalize()
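As a tiny illustration of what URI.normalize() alone gives you (it resolves "." and ".." path segments but does not lower-case the host or strip tracking parameters; the URL is made up):
import java.net.URI;

public class NormalizeDemo {
    public static void main(String[] args) {
        URI uri = URI.create("http://Example.com/a/./b/../c?utm_source=x");
        // prints http://Example.com/a/c?utm_source=x
        // (dot segments resolved; host case and query left untouched)
        System.out.println(uri.normalize());
    }
}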
I found this question last night, but there wasn't an answer I was looking for, so I made my own. Here it is in case somebody in the future wants it:
/**
* - Convert the scheme and host to lowercase (done by java.net.URL)
* - Normalize the path (done by java.net.URI)
* - Add the port number.
* - Remove the fragment (the part after the #).
* - Remove trailing slash.
* - Sort the query string params.
* - Remove some query string params like "utm_*" and "*session*".
*/
public class NormalizeURL
{
public static String normalize(final String taintedURL) throws MalformedURLException
{
final URL url;
try
{
url = new URI(taintedURL).normalize().toURL();
}
catch (URISyntaxException e) {
throw new MalformedURLException(e.getMessage());
}
final String path = url.getPath().replaceAll("/$", ""); // replaceAll: "/$" is a regex
final SortedMap<String, String> params = createParameterMap(url.getQuery());
final int port = url.getPort();
final String queryString;
if (params != null)
{
// Some params are only relevant for user tracking, so remove the most commons ones.
for (Iterator<String> i = params.keySet().iterator(); i.hasNext();)
{
final String key = i.next();
if (key.startsWith("utm_") || key.contains("session"))
{
i.remove();
}
}
queryString = "?" + canonicalize(params);
}
else
{
queryString = "";
}
return url.getProtocol() + "://" + url.getHost()
+ (port != -1 && port != 80 ? ":" + port : "")
+ path + queryString;
}
/**
* Takes a query string, separates the constituent name-value pairs, and
* stores them in a SortedMap ordered by lexicographical order.
* @return Null if there is no query string.
*/
private static SortedMap<String, String> createParameterMap(final String queryString)
{
if (queryString == null || queryString.isEmpty())
{
return null;
}
final String[] pairs = queryString.split("&");
final Map<String, String> params = new HashMap<String, String>(pairs.length);
for (final String pair : pairs)
{
if (pair.length() < 1)
{
continue;
}
String[] tokens = pair.split("=", 2);
for (int j = 0; j < tokens.length; j++)
{
try
{
tokens[j] = URLDecoder.decode(tokens[j], "UTF-8");
}
catch (UnsupportedEncodingException ex)
{
ex.printStackTrace();
}
}
switch (tokens.length)
{
case 1:
{
if (pair.charAt(0) == '=')
{
params.put("", tokens[0]);
}
else
{
params.put(tokens[0], "");
}
break;
}
case 2:
{
params.put(tokens[0], tokens[1]);
break;
}
}
}
return new TreeMap<String, String>(params);
}
/**
* Canonicalize the query string.
*
* @param sortedParamMap Parameter name-value pairs in lexicographical order.
* @return Canonical form of query string.
*/
private static String canonicalize(final SortedMap<String, String> sortedParamMap)
{
if (sortedParamMap == null || sortedParamMap.isEmpty())
{
return "";
}
final StringBuffer sb = new StringBuffer(350);
final Iterator<Map.Entry<String, String>> iter = sortedParamMap.entrySet().iterator();
while (iter.hasNext())
{
final Map.Entry<String, String> pair = iter.next();
sb.append(percentEncodeRfc3986(pair.getKey()));
sb.append('=');
sb.append(percentEncodeRfc3986(pair.getValue()));
if (iter.hasNext())
{
sb.append('&');
}
}
return sb.toString();
}
/**
* Percent-encode values according the RFC 3986. The built-in Java URLEncoder does not encode
* according to the RFC, so we make the extra replacements.
*
* @param string Decoded string.
* @return Encoded string per RFC 3986.
*/
private static String percentEncodeRfc3986(final String string)
{
try
{
return URLEncoder.encode(string, "UTF-8").replace("+", "%20").replace("*", "%2A").replace("%7E", "~");
}
catch (UnsupportedEncodingException e)
{
return string;
}
}
}
Because you also want to identify URLs which refer to the same content, I found this paper from the WWW2007 pretty interesting: Do Not Crawl in the DUST: Different URLs with Similar Text. It provides you with a nice theoretical approach.
No, there is nothing in the standard libraries to do this. Canonicalization includes things like decoding unnecessarily encoded characters, converting hostnames to lowercase, etc.
e.g. http://ACME.com/./foo%26bar becomes:
http://acme.com/foo&bar
URI's normalize() does not do this.
The RL library:
https://github.com/backchatio/rl
goes quite a ways beyond java.net.URL.normalize().
It's in Scala, but I imagine it should be usable from Java.
You can do this with the Restlet framework using Reference.normalize(). You should also be able to remove the elements you don't need quite conveniently with this class.
In Java, normalize parts of a URL
Example of a URL: https://i0.wp.com:55/lplresearch.com/wp-content/feb.png?ssl=1&myvar=2#myfragment
protocol: https
domain name: i0.wp.com
subdomain: i0
port: 55
path: /lplresearch.com/wp-content/feb.png
query: ?ssl=1
parameters: &myvar=2
fragment: #myfragment
Code to do the URL parsing:
import java.util.*;
import java.util.regex.*;

public class regex {

    public static String getProtocol(String the_url) {
        Pattern p = Pattern.compile("^(http|https|smtp|ftp|file|pop)://.*");
        Matcher m = p.matcher(the_url);
        // matches() must succeed before group() can be called
        return m.matches() ? m.group(1) : null;
    }

    public static String getParameters(String the_url) {
        // '#' is excluded from the character class so the fragment is not captured
        Pattern p = Pattern.compile(".*(\\?[-a-zA-Z0-9_.!$&''()*+,;=]+)(#.*)*$");
        Matcher m = p.matcher(the_url);
        return m.matches() ? m.group(1) : null;
    }

    public static String getFragment(String the_url) {
        Pattern p = Pattern.compile(".*(#.*)$");
        Matcher m = p.matcher(the_url);
        return m.matches() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String the_url =
                "https://i0.wp.com:55/lplresearch.com/" +
                "wp-content/feb.png?ssl=1&myvar=2#myfragment";
        System.out.println(getProtocol(the_url));
        System.out.println(getFragment(the_url));
        System.out.println(getParameters(the_url));
    }
}
Prints
https
#myfragment
?ssl=1&myvar=2
You can then push and pull on the parts of the URL until they pass muster.
I have a simple way to solve it. Here is my code:
public static String normalizeURL(String oldLink)
{
    int pos = oldLink.indexOf("://");
    String newLink = "http" + oldLink.substring(pos);
    return newLink;
}
