Unit tests for a Spring Boot application - Java

I have never used JUnit before and I need to test my code with it.
I have been searching Google all day, but the examples I found use Mockito with dependency injection (@Autowired), which my code does not use.
How can I test the following methods?
Thanks in advance.
public class WordService {

    public WordService() {
    }

    public static String upperCaseFirst(String value) {
        char[] listChar = value.toCharArray();
        listChar[0] = Character.toUpperCase(listChar[0]);
        return new String(listChar);
    }

    /**
     * Find and return the search word.
     *
     * @param name
     * @return the word sought, or null if not found
     */
    public Word findWordByName(String name) {
        String nameUpper = upperCaseFirst(name);
        WordDao w = new WordDao();
        Word found = w.findWord(nameUpper);
        List<String> definitions = new ArrayList<>();
        if (found != null) {
            for (int i = 0; i < found.getDefinition().size(); i++) {
                StringBuffer defBuffer = new StringBuffer();
                String definitionFound = found.getDefinition().get(i);
                definitionFound = definitionFound.replace("\n", "");
                defBuffer.append(definitionFound);
                defBuffer.append("_");
                definitions.add(i, defBuffer.toString());
            }
            found.setDefinition(definitions);
        }
        return found;
    }

    /**
     * @return a list of all words
     */
    public List<Word> findAllWord() {
        WordDao w = new WordDao();
        return w.findAllWords();
    }
}

You can extract WordDao to a class-level field and add a setter for it.
After that, in a unit test, you can mock WordDao and control what its method calls return. For the second method it would look something like:
WordDao wMocked = mock(WordDao.class);
Word word1 = new ...
Word word2 = new ...
List<Word> words = List.of(word1, word2);
when(wMocked.findAllWords()).thenReturn(words);
WordService ws = new WordService();
ws.setWordDao(wMocked);
assertEquals(words, ws.findAllWord());
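For a fuller picture, here is a minimal sketch of that refactoring plus a JUnit 4/Mockito test. The wordDao field, the setWordDao setter, and the assumption that Word has a no-arg constructor are all mine, layered on top of the original code:

// File: WordService.java - refactored so the DAO is injectable (field + setter are assumptions)
import java.util.List;

public class WordService {

    private WordDao wordDao = new WordDao();

    public void setWordDao(WordDao wordDao) {
        this.wordDao = wordDao;
    }

    public List<Word> findAllWord() {
        return wordDao.findAllWords();
    }
}

// File: WordServiceTest.java - JUnit 4 + Mockito
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.Arrays;
import java.util.List;
import org.junit.Test;

public class WordServiceTest {

    @Test
    public void findAllWordReturnsWordsFromDao() {
        // Mock the DAO so the test never touches a real data source.
        WordDao daoMock = mock(WordDao.class);
        List<Word> words = Arrays.asList(new Word(), new Word());
        when(daoMock.findAllWords()).thenReturn(words);

        // Inject the mock through the new setter.
        WordService ws = new WordService();
        ws.setWordDao(daoMock);

        assertEquals(words, ws.findAllWord());
    }
}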

Related

Compare two strings like url in android

I have two strings:
http://porter.com/request/.*
and
http://porter.com/request/tokenId
I want to check that the first parts (http://porter.com/request) are the same in both, and that tokenId is not null, because in some cases the URL could be just http://porter.com/request/.
I use something like this:
override fun validate(pair: Pair<URI, URI>): Boolean {
    val uri = pair.first.path.split("/").dropLast(1).filter { it.isNotBlank() }.joinToString("")
    val uriIntent = pair.second.path.split("/").dropLast(1).filter { it.isNotBlank() }.joinToString("")
    val asd = pair.second.path.split("/").filter { it.isNotBlank() }.last().isNotBlank()
    return uri == uriIntent && asd
}
but this is not working for the last case: http://porter.com/request/
Any ideas?
final String regex = "(http://porter\\.com/request/)\\w+"; // \\w+ requires a real token after /request/

/**
 * The code below returns false,
 * since the URL doesn't have a real last path segment.
 */
final String yourUrl = "http://porter.com/request/.*";
final boolean valid = yourUrl.matches(regex);

/**
 * Same as the example above (returns false).
 */
final String yourUrl2 = "http://porter.com/request/*";
final boolean valid2 = yourUrl2.matches(regex);

/**
 * This returns true. The link is OK.
 */
final String yourUrl3 = "http://porter.com/request/tokenId";
final boolean valid3 = yourUrl3.matches(regex);
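Tying this back to the original Pair<URI, URI> question, a self-contained sketch (the class and method names are mine) might look like:

import java.util.regex.Pattern;

public class UrlValidator {

    // Prefix must match exactly; the token must be a non-empty run of word characters.
    private static final Pattern REQUEST_URL = Pattern.compile("http://porter\\.com/request/\\w+");

    // Hypothetical helper: true only when the URL has a real token after /request/.
    static boolean isValidRequestUrl(String url) {
        return REQUEST_URL.matcher(url).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidRequestUrl("http://porter.com/request/tokenId")); // true
        System.out.println(isValidRequestUrl("http://porter.com/request/"));        // false
        System.out.println(isValidRequestUrl("http://porter.com/request/.*"));      // false
    }
}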

Given a span of string like [0..2) how to find string equivalent?

I am using the Apache OpenNLP toolkit in Java. I wish to display only the named entities in a given text (geographical, person, etc.). The following code snippet gives string spans:
try {
    System.out.println("Input : Pierre Vinken is 61 years old");
    InputStream modelIn = new FileInputStream("en-ner-person.bin");
    TokenNameFinderModel model = new TokenNameFinderModel(modelIn);
    NameFinderME nameFinder = new NameFinderME(model);
    String[] sentence = new String[] {
        "Pierre", "Vinken", "is", "61", "years", "old", "."
    };
    Span[] nameSpans = nameFinder.find(sentence);
    for (Span s : nameSpans)
        System.out.println("Name Entity : " + s.toString());
}
catch (IOException e) {
    e.printStackTrace();
}
Output :
Input : Pierre Vinken is 61 years old
Name Entity : [0..2) person
How can I get the equivalent string rather than the span? Is there an API for that?
Span has the method getCoveredText(CharSequence text) which will do this. But I don't understand why you need an API method to get the text corresponding to a span. A span clearly provides start (inclusive) and end (exclusive) integer offsets. So the following suffices:
StringBuilder builder = new StringBuilder();
for (int i = s.getStart(); i < s.getEnd(); i++) {
    builder.append(sentence[i]).append(" "); // "sentence" is the token array from the question
}
String name = builder.toString();
You can use the Span class itself.
The following class method returns the CharSequence that correspond to the Span instance from another CharSequence text:
/**
 * Retrieves the string covered by the current span of the specified text.
 *
 * @param text
 * @return the substring covered by the current span
 */
public CharSequence getCoveredText(CharSequence text) { ... }
Notice that this class also has two static methods that accept an array of Span and respectively a CharSequence or an array of tokens (String[]) to return the equivalent array of String.
/**
 * Converts an array of {@link Span}s to an array of {@link String}s.
 *
 * @param spans
 * @param s
 * @return the strings
 */
public static String[] spansToStrings(Span[] spans, CharSequence s) {
    String[] tokens = new String[spans.length];
    for (int si = 0, sl = spans.length; si < sl; si++) {
        tokens[si] = spans[si].getCoveredText(s).toString();
    }
    return tokens;
}

public static String[] spansToStrings(Span[] spans, String[] tokens) { ... }
I hope it helps...
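Applied to the question's snippet, the String[] overload is the direct fit (nameSpans and sentence are the variables from the question's code):

String[] names = Span.spansToStrings(nameSpans, sentence);
for (String name : names) {
    System.out.println("Name Entity : " + name); // e.g. "Pierre Vinken"
}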

Lucene wrong match

I have a CSV file:
id|name
1|PC
2|Activation
3|USB
public class TESTResult {
    private Long id;
    private String name;
    private Float score;
    // with setters & getters
}

public class TEST {
    private Long id;
    private String name;
    // with setters & getters
}

public class JobTESTTagger {

    private static Version VERSION;
    private static CharArraySet STOPWORDS;
    private static RewriteMethod REWRITEMETHOD;
    private static Float MINSCORE = 0.0001F;

    static {
        BooleanQuery.setMaxClauseCount(100000);
        VERSION = Version.LUCENE_44;
        STOPWORDS = StopAnalyzer.ENGLISH_STOP_WORDS_SET;
        REWRITEMETHOD = MultiTermQuery.CONSTANT_SCORE_FILTER_REWRITE;
    }

    public static ArrayList<TESTResult> searchText(String text, String keyId,
            List<TEST> TESTs) {
        ArrayList<TESTResult> results = new ArrayList<TESTResult>();
        MemoryIndex index = new MemoryIndex();
        EnglishAnalyzer englishAnalyzer = new EnglishAnalyzer(VERSION, STOPWORDS);
        QueryParser parser = new QueryParser(VERSION, "text", englishAnalyzer);
        parser.setMultiTermRewriteMethod(REWRITEMETHOD);
        index.addField("text", text, englishAnalyzer);
        for (int i = 0; i < TESTs.size(); i++) {
            TEST TEST = TESTs.get(i);
            String criteria = "\"" + TEST.getName().trim() + "\"";
            if (criteria == null || criteria.isEmpty())
                continue;
            criteria = criteria.replaceAll("\r", " ");
            criteria = criteria.replaceAll("\n", " ");
            try {
                Query query = parser.parse(criteria);
                Float score = index.search(query);
                if (score > MINSCORE) {
                    TESTResult result = new TESTResult(TEST.getId(), TEST.getName(), score);
                    results.add(result);
                }
            } catch (ParseException e) {
                System.out.println("Could not parse article.");
            }
        }
        return results;
    }

    public static void main(String[] args) {
        CsvReader reader = new CsvReader("C:\\a.csv");
        reader.setDelimiter('|');
        reader.readHeaders();
        List<TEST> result = new ArrayList<TEST>();
        while (reader.readRecord()) {
            Long id = Long.valueOf(reader.get("id").trim());
            String name = reader.get("name").trim();
            TEST concept = new TEST(id, name);
            result.add(concept);
        }
        String text = "These activities are good. I have a good PC in my house.";
        ArrayList<TESTResult> testresults = searchText(text, keyId, result);
    }
}
'activities' is matching 'Activation'. How is that possible? Can anybody tell me how Lucene matches words?
Thanks
R
EnglishAnalyzer, along with most language-specific analyzers, uses a stemmer. This means that it reduces terms to a stem (or root) of the term, in order to attempt to match more loosely. Mostly this works well, removing suffixes and matching up derived words to a common root. So when I search for "fish", I also find "fished", "fishing" and "fishes".
In this case though, "activities" and "activation" both reduce to the root "activ", resulting in the match you are seeing. Another example: "organ", "organic" and "organize" all have the common stem "organ".
You can stem or not; neither approach is perfect. If you don't stem, you'll miss relevant results. If you do, you'll hit some odd irrelevant results.
To deal with specific problematic cases, you can define a stemmer exclusion set in EnglishAnalyzer to prevent stemming just on those specific problematic terms. In this case, I would think of "activation" as the probable term to prevent stemming on, though you could go either way. So I could do something like:
CharArraySet stemExclusionSet = new CharArraySet(VERSION, 1, true);
stemExclusionSet.add("activation");
EnglishAnalyzer englishAnalyzer = new EnglishAnalyzer(VERSION, STOPWORDS, stemExclusionSet);
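If you want to see the stemming directly, a minimal sketch like the one below (Lucene 4.x tokenStream API; the field name "text" is arbitrary) prints the analyzed form of each term, so you can confirm that "activities" and "activation" both come out as "activ":

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.en.EnglishAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

public class StemDemo {
    public static void main(String[] args) throws Exception {
        Analyzer analyzer = new EnglishAnalyzer(Version.LUCENE_44);
        try (TokenStream ts = analyzer.tokenStream("text", "activities activation organ organic organize")) {
            CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
            ts.reset();
            while (ts.incrementToken()) {
                // Expected, per the stemming described above: activ, activ, organ, organ, organ
                System.out.println(term);
            }
            ts.end();
        }
    }
}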

Get string between a symbol

I am extracting a YouTube video id from a YouTube link. The list looks like this:
http://www.youtube.com/watch?v=mmmc&feature=plcp
I want to get the mmmc only.
Should I use .replaceAll?
Three ways:
Url parsing:
http://download.oracle.com/javase/6/docs/api/java/net/URL.html
URL url = new URL("http://www.youtube.com/watch?v=mmmc&feature=plcp");
url.getQuery(); // returns the query string
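Building on that, a short self-contained sketch (plain JDK; the splitting logic and class name are mine) that pulls v out of the query string:

import java.net.URL;

public class QueryParamDemo {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://www.youtube.com/watch?v=mmmc&feature=plcp");
        for (String pair : url.getQuery().split("&")) {
            String[] kv = pair.split("=", 2); // split each pair into key and value
            if (kv.length == 2 && kv[0].equals("v")) {
                System.out.println(kv[1]); // prints "mmmc"
            }
        }
    }
}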
Regular Expression
Examples here http://www.vogella.com/articles/JavaRegularExpressions/article.html
Tokenize
String s = "http://www.youtube.com/watch?v=mmmc&feature=plcp";
String[] arr = s.split("=");
String[] arr1 = arr[1].split("&");
System.out.println(arr1[0]);
If you'd like to use regular expressions, this could be a solution:
Pattern p = Pattern
        .compile("http://www.youtube.com/watch\\?v=([\\s\\S]*?)\\&feature=plcp");
Matcher m = p.matcher(youtubeLink);
if (m.find()) {
    return m.group(1);
} else {
    throw new IllegalArgumentException("invalid youtube link");
}
Of course, this will only work if the feature is always plcp; if not, you could simply remove that part or replace it with a wildcard as I did with mmmc.
Edit: now I know what you are looking for, I hope:
String url = "http://www.youtube.com/watch?v=mmmc&feature=plcp";
String search = "v=";
int index = url.indexOf(search);
int index2 = url.indexOf("&", index);
String found = url.substring(index + search.length(), index2);
System.out.println(found);
Here's a generic solution (using Guava MapSplitter):
public final class UrlUtil {

    /**
     * Query string splitter.
     */
    private static final MapSplitter PARAMS_SPLITTER = Splitter.on('&').withKeyValueSeparator("=");

    /**
     * Get the param value in the provided url for the provided param.
     *
     * @param url Url to use
     * @param param Param to use
     * @return param value or null.
     */
    public static String getParamVal(String url, String param)
    {
        if (url.contains("?")) {
            final String query = url.substring(url.indexOf('?') + 1);
            return PARAMS_SPLITTER.split(query).get(param);
        }
        return null;
    }

    public static void main(final String[] args)
    {
        final String url = "http://www.youtube.com/watch?v=mmmc&feature=plcp";
        System.out.println(getParamVal(url, "v"));
        System.out.println(getParamVal(url, "feature"));
    }
}
Outputs:
mmmc
plcp

Working with EnumSet

This code works, but needs a try/catch block.
public enum AccentuationUTF8 {
    /** */ é, /** */ è, /** */ ç, /** */ à, /** */ ù,
    /** */ ä, /** */ ë, /** */ ö, /** */ ï, /** */ ü,
    /** */ â, /** */ ê, /** */ î, /** */ ô, /** */ û,
}
......
final EnumSet<AccentuationUTF8> esUtf8 = EnumSet.noneOf(AccentuationUTF8.class);
final String[] acc1 = {"é", "à", "u"};
for (final String string : acc1) {
    try { // The ontological problem
        esUtf8.add(AccentuationUTF8.valueOf(string));
    } catch (final Exception e) {
        System.out.println(string + " not an accent.");
    }
}
System.out.println(esUtf8.size() + "\t" + esUtf8.toString());
output:
u not an accent.
2 [é, à]
I want to generate an EnumSet with all the accents of a word or sentence.
Edit after comments
Is it possible to manage such an EnumSet without using try (needed by AccentuationUTF8.valueOf(string))?
Is there a better way to code this?
FINAL EDIT
Your responses suggested a good solution: since AccentuationUTF8.valueOf(String) throws an exception for unknown names, work around it by creating a temporary HashSet, which can be queried without an exception.
So the ugly try/catch is now removed, and the code is now:
final Set<String> setTmp = new HashSet<>(AccentuationUTF8.values().length);
for (final AccentuationUTF8 object : AccentuationUTF8.values()) {
    setTmp.add(object.toString());
}
final EnumSet<AccentuationUTF8> esUtf8 = EnumSet.noneOf(AccentuationUTF8.class);
final String[] acc1 = {"é", "à", "u"};
for (final String string : acc1) {
    if (setTmp.contains(string)) {
        esUtf8.add(AccentuationUTF8.valueOf(string));
    } else {
        System.out.println(string + " not an accent.");
    }
}
System.out.println(esUtf8.size() + "\t" + esUtf8.toString());
Thanks for the attention you paid to this.
I don't think an enum is the best approach here - partly because it's only going to work for valid Java identifiers.
It looks like what you really want is just a Set<Character>, with something like:
Set<Character> accentsInText = new HashSet<Character>();
for (int i = 0; i < text.length(); i++) {
    Character c = text.charAt(i);
    if (ALL_ACCENTS.contains(c)) { // ALL_ACCENTS: some predefined Set<Character> of accent characters
        accentsInText.add(c);
    }
}
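A self-contained version of that idea might look like this (the class name is mine, and ALL_ACCENTS is just the constants from the question's enum, written as plain characters):

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class AccentScanner {

    // The accents from the question, as characters (assumption: this is the full set of interest).
    private static final Set<Character> ALL_ACCENTS = new HashSet<>(Arrays.asList(
            'é', 'è', 'ç', 'à', 'ù', 'ä', 'ë', 'ö', 'ï', 'ü', 'â', 'ê', 'î', 'ô', 'û'));

    // Collect every accented character that occurs in the text.
    public static Set<Character> accentsIn(String text) {
        Set<Character> accentsInText = new HashSet<>();
        for (int i = 0; i < text.length(); i++) {
            char c = text.charAt(i);
            if (ALL_ACCENTS.contains(c)) {
                accentsInText.add(c);
            }
        }
        return accentsInText;
    }

    public static void main(String[] args) {
        System.out.println(accentsIn("étude à l'université")); // e.g. [é, à]
    }
}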
