Get ID lexemes from an ANTLR3 lexer class into a JTable - Java

I am building a Java clone-code detector in Swing that uses ANTLR. This is the screenshot:
https://www.dropbox.com/s/wnumgsjmpps33v5/SemogaYaAllah.png
As the screenshot shows, there is a main file that is compared against other files. The way I do this is by comparing the tokens of the main file to the tokens of the other files. The problem is that I fail to get the ID lexemes (tokens) from my lexer class.
This is my Antlr3JavaLexer:
public class Antlr3JavaLexer extends Lexer {
public static final int PACKAGE=84;
public static final int EXPONENT=173;
public static final int STAR=49;
public static final int WHILE=103;
public static final int MOD=32;
public static final int MOD_ASSIGN=33;
public static final int CASE=58;
public static final int CHAR=60;
I've created a JavaParser class like this to use that lexer:
public final class JavaParser extends AParser { //Parser is my Abstract Class
JavaParser() {
}
@Override
protected boolean parseFile(JCCDFile f, final ASTManager treeContainer)throws ParseException, IOException {
BufferedReader in = new BufferedReader(new FileReader(f.getFile()));
String filePath = f.getNama(); // returns the name of the file
final Antlr3JavaLexer lexer = new Antlr3JavaLexer();
lexer.preserveWhitespacesAndComments = false;
try {
lexer.setCharStream(new ANTLRReaderStream(in));
} catch (IOException e) {
e.printStackTrace();
return false;
}
//This is the problem:
//when I activate this piece of code, I get the output shown here:
//https://www.dropbox.com/s/80uyva56mk1r5xy/Bismillah2.png
/*
StringBuilder sbu = new StringBuilder();
while (true) {
org.antlr.runtime.Token token = lexer.nextToken();
if (token.getType() == Antlr3JavaLexer.EOF) {
break;
}
sbu.append(token.getType());
System.out.println(token.getType() + ": :" + token.getText());
}*/
final CommonTokenStream tokens = new CommonTokenStream();
tokens.setTokenSource(lexer);
tokens.LT(10); // force load
// Create the parser
Antlr3JavaParser parser = new Antlr3JavaParser(tokens);
StringBuffer sb = new StringBuffer();
sb.append(tokens.toString());
DefaultTableModel model = (DefaultTableModel) Main_Menu.jTable2.getModel();
List<final_tugas_akhir.Report2> theListData = new ArrayList<Report2>();
final_tugas_akhir.Report2 theResult = new final_tugas_akhir.Report2();
theResult.setFile(filePath);
theResult.setId(sb.toString());
theResult.setNum(sbu.toString()); // NOTE: sbu is declared inside the commented-out block above, so this only compiles when that block is enabled
theListData.add(theResult);
for (Report2 report : theListData) {
System.out.println(report.getFile());
System.out.println(report.getId());
model.addRow(new Object[]{
report.getFile(),
report.getId(),
report.getNum(),
});
}
// in CompilationUnit
CommonTree tree;
try {
tree = (CommonTree) parser.compilationUnit().getTree();
DOTTreeGenerator gen = new DOTTreeGenerator();
StringTemplate st = gen.toDOT(tree);
} catch (RecognitionException e) {
e.printStackTrace();
return false;
}
walkThroughChildren(tree, treeContainer, parser.getTokenStream()); //this is my method to check for similar tokens
in.close();
this.posisiFix(treeContainer); //fix position
return true;
}
Once again, this is the error output of my Java program: https://www.dropbox.com/s/80uyva56mk1r5xy/Bismillah2.png.
The tokens always give me a null value.
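For what it's worth, one likely cause: the commented-out while loop drains the lexer with nextToken() before the CommonTokenStream is attached, so the parser afterwards sees an empty stream. Below is a minimal sketch of listing every token without consuming them away from the parser; Antlr3JavaLexer is the generated lexer from the question, and fill()/getTokens() assume an ANTLR 3.3+ runtime.
import org.antlr.runtime.ANTLRStringStream;
import org.antlr.runtime.CommonTokenStream;
import org.antlr.runtime.Token;
public class TokenDump {
    public static void main(String[] args) {
        // Hypothetical inline input; in the question this comes from a file reader.
        Antlr3JavaLexer lexer = new Antlr3JavaLexer();
        lexer.setCharStream(new ANTLRStringStream("int x = 42;"));
        CommonTokenStream tokens = new CommonTokenStream(lexer);
        tokens.fill(); // buffer the whole input instead of forcing a chunk with LT(10)
        for (Object o : tokens.getTokens()) {
            Token t = (Token) o;
            if (t.getType() == Token.EOF) {
                break;
            }
            // getType() is the numeric ID (e.g. Antlr3JavaLexer.STAR == 49),
            // getText() is the matched lexeme
            System.out.println(t.getType() + " : " + t.getText());
        }
    }
}
The same CommonTokenStream can afterwards be handed to Antlr3JavaParser, because fill() buffers the tokens instead of discarding them.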

Related

Why isn't this multithreaded code faster?

This is my Java code. Previously it called BatchGenerateResult sequentially, which is a lengthy process, but I wanted to try some multithreading and have all of them run at the same time. However, when I test it, the new time is the same as the old time. I expected the new time to be faster. Does anyone know what's wrong?
public class PlutoMake {
public static String classDir;
public static void main(String[] args) throws JSONException, IOException,
InterruptedException {
// determine path to the class file, I will use it as current directory
String classDirFile = PlutoMake.class.getResource("PlutoMake.class")
.getPath();
classDir = classDirFile.substring(0, classDirFile.lastIndexOf("/") + 1);
// get the input arguments
final String logoPath;
final String filename;
if (args.length < 2) {
logoPath = classDir + "tests/android.png";
filename = "result.png";
} else {
logoPath = args[0];
filename = args[1];
}
// make sure the logo image exists
File logofile = new File(logoPath);
if (!logofile.exists() || logofile.isDirectory()) {
System.exit(1);
}
// get the master.js file
String text = readFile(classDir + "master.js");
JSONArray files = new JSONArray(text);
ExecutorService es = Executors.newCachedThreadPool();
// loop through all active templates
int len = files.length();
for (int i = 0; i < len; i += 1) {
final JSONObject template = files.getJSONObject(i);
if (template.getBoolean("active")) {
es.execute(new Runnable() {
@Override
public void run() {
try {
BatchGenerateResult(logoPath, template.getString("template"),
template.getString("mapping"),
template.getString("metadata"), template.getString("result")
+ filename, template.getString("filter"),
template.getString("mask"), template.getInt("x"),
template.getInt("y"), template.getInt("w"),
template.getInt("h"));
} catch (IOException | JSONException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
});
}
}
es.shutdown();
boolean finished = es.awaitTermination(2, TimeUnit.MINUTES);
}
private static void BatchGenerateResult(String logoPath, String templatePath,
String mappingPath, String metadataPath, String resultPath,
String filter, String maskPath, int x, int y, int w, int h)
throws IOException, JSONException {
ColorFilter filterobj = null;
if (filter.equals("none")) {
filterobj = new NoFilter();
} else if (filter.equals("darken")) {
filterobj = new Darken();
} else if (filter.equals("vividlight")) {
filterobj = new VividLight();
} else {
System.exit(1);
}
String text = readFile(classDir + metadataPath);
JSONObject metadata = new JSONObject(text);
Map<Point, Point> mapping = MyJSON.ReadMapping(classDir + mappingPath);
BufferedImage warpedimage = Exporter.GenerateWarpedLogo(logoPath, maskPath,
mapping, metadata.getInt("width"), metadata.getInt("height"));
// ImageIO.write(warpedimage, "png", new FileOutputStream(classDir +
// "warpedlogo.png"));
Exporter.StampLogo(templatePath, resultPath, x, y, w, h, warpedimage,
filterobj);
warpedimage.flush();
}
private static String readFile(String path) throws IOException {
File file = new File(path);
FileInputStream fis = new FileInputStream(file);
byte[] data = new byte[(int) file.length()];
fis.read(data);
fis.close();
String text = new String(data, "UTF-8");
return text;
}
}
It looks like, for all practical purposes, the following code is the only part that can gain performance from multithreading.
BufferedImage warpedimage = Exporter.GenerateWarpedLogo(logoPath, maskPath,
mapping, metadata.getInt("width"), metadata.getInt("height"));
// ImageIO.write(warpedimage, "png", new FileOutputStream(classDir +
// "warpedlogo.png"));
Exporter.StampLogo(templatePath, resultPath, x, y, w, h, warpedimage,
filterobj);
The rest of it is mostly I/O - I doubt you can achieve much performance improvement there.
Profile the code and check how long each of the methods takes to execute. Based on that you should be able to see where the time actually goes.
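If a full profiler feels like overkill, a hand-rolled timing wrapper already gives rough numbers. A minimal sketch; the Stopwatch helper is illustrative and not part of the original code:
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;
// Illustrative helper: wrap each suspect stage and print its wall-clock time.
public final class Stopwatch {
    public static <T> T time(String label, Supplier<T> stage) {
        long start = System.nanoTime(); // monotonic, safe for measuring intervals
        T result = stage.get();
        long ms = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
        System.out.println(label + " took " + ms + " ms");
        return result;
    }
}
Wrapping a call such as Exporter.GenerateWarpedLogo(...) in Stopwatch.time("warp", () -> ...) quickly shows whether the image generation or the surrounding I/O dominates.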
Hi, sorry, I'm not able to add this to the comment section as I just joined.
I would suggest first trying a dummy method to check whether the threading works at your end, and then adding your business logic.
If the sample works, then you might need to check your "template" class.
Here's the sample; check the timestamps:
package example;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
public class ExecutorStaticExample {
public static void main(String[] args){
ExecutorService ex = Executors.newCachedThreadPool();
for (int i=0;i<10;i++){
ex.execute(new Runnable(){
@Override
public void run() {
helloStatic();
System.out.println(System.currentTimeMillis());
}
});
}
}
static void helloStatic(){
System.out.println("hello form static");
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}

Dictionary example for uimaFIT (code-based)

I'm having a look at uimaFIT and I had quite some difficulty adding a dictionary annotator to an analysis engine.
This is my best shot so far:
public class LocationAnnotator extends JCasAnnotator_ImplBase {
public static final String RES_DICTIONARY = "dictionary";
@ExternalResource(key = RES_DICTIONARY)
private DataResource resource;
private Dictionary dictionary;
@Override
public void initialize(UimaContext context) throws ResourceInitializationException {
super.initialize(context);
try {
DictionaryBuilder dictBuilder = new HashMapDictionaryBuilder();
// create dictionary file parser
DictionaryFileParserImpl fileParser = new DictionaryFileParserImpl();
fileParser.parseDictionaryFile(resource.getUri().getPath(), resource.getInputStream(), dictBuilder);
dictionary = dictBuilder.getDictionary();
} catch (IOException e) {
throw new ResourceInitializationException();
}
}
@Override
public void process(JCas cas) throws AnalysisEngineProcessException {
String docText = cas.getDocumentText();
for (String line : docText.split("\n")) {
for (String word : line.split(" ")) {
if (dictionary.contains(word)) {
int pos = docText.indexOf(word);
Location annotation = new Location(cas, pos, pos + word.length());
annotation.addToIndexes();
}
}
}
}
}
I'm executing the engine like this:
CollectionReaderDescription reader = CollectionReaderFactory.createReaderDescription(CvReader.class, CvReader.PARAM_INPUT_FILE, "docs/simple-doc.txt");
AnalysisEngineDescription tokenizer = AnalysisEngineFactory.createEngineDescription(LocationAnnotator.class);
ExternalResourceFactory.bindResource(tokenizer, LocationAnnotator.RES_DICTIONARY, "META-INF/dictionaries/location.dict.xml");
for (JCas cas : SimplePipeline.iteratePipeline(reader, tokenizer)) {
for (Location location : JCasUtil.select(cas, Location.class)) {
System.out.println("Found location: " + location.getCoveredText());
}
}
Is there no more elegant way? I don't like the initialization; I would expect to initialize the dictionary through an annotation, just like the @ExternalResource.
I would be glad if someone could provide me with a simpler example. Thanks!
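For what it's worth, a sketch of a more uimaFIT-idiomatic setup: move the parsing into a SharedResourceObject so the framework injects a ready-made dictionary. Dictionary, DictionaryBuilder, HashMapDictionaryBuilder and DictionaryFileParserImpl are the types from the question; the package names assume the Apache uimaFIT 2.x layout.
import java.io.IOException;
import org.apache.uima.resource.DataResource;
import org.apache.uima.resource.ResourceInitializationException;
import org.apache.uima.resource.SharedResourceObject;
// The resource parses the dictionary file once, when uimaFIT loads it.
public class DictionaryResource implements SharedResourceObject {
    private Dictionary dictionary;
    @Override
    public void load(DataResource resource) throws ResourceInitializationException {
        try {
            DictionaryBuilder builder = new HashMapDictionaryBuilder();
            DictionaryFileParserImpl parser = new DictionaryFileParserImpl();
            parser.parseDictionaryFile(resource.getUri().getPath(),
                    resource.getInputStream(), builder);
            dictionary = builder.getDictionary();
        } catch (IOException e) {
            throw new ResourceInitializationException(e);
        }
    }
    public Dictionary getDictionary() {
        return dictionary;
    }
}
The annotator then only declares @ExternalResource(key = RES_DICTIONARY) private DictionaryResource dictResource; and drops the initialize() override; the binding gains the resource class as an extra argument, e.g. bindResource(tokenizer, LocationAnnotator.RES_DICTIONARY, DictionaryResource.class, "META-INF/dictionaries/location.dict.xml").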

reducing duplication of code in inheritance

The method reads the data for the First class and the Second class using a Scanner and then stores them in the ArrayList shared by the two classes. First and Second both inherit from the Main class. The problem I have is the duplication: I create two objects.
How can I create only one and use it for both?
import java.io.File;
import java.io.FileNotFoundException;
import java.util.*;
public class Auto {
private ArrayList<Main> lists;
public Auto() {
lists = new ArrayList<Main>();
}
public void storeData(Main main) {
lists.add(main);
}
public void readFile(String filePath) throws FileNotFoundException {
File file = new File(filePath);
Scanner input = new Scanner(file);
String dataToBe = ""; // initialize so the reads below always see a value
while (input.hasNext()) {
String lines = input.nextLine().trim();
Scanner scanner = new Scanner(lines).useDelimiter("\n[ ]*,");
if (lines.startsWith("Data")) {
if (lines.startsWith("FirstData")) {
dataToBe = "first";
} else if (lines.startsWith("SecondData")) {
dataToBe = "second";
}
} else if (dataToBe.equals("first")) {
Main main = new First();
main.readData(scanner);
storeData(main);
} else if (dataToBe.equals("second")) {
Main main = new Second();
main.readData(scanner);
storeData(main);
}
}
}
}
Okay, you might think it's long-winded, but it's probably how I would do it under your restrictions.
public void readFile(String filePath) throws FileNotFoundException {
final Pattern pattern = Pattern.compile("\n[ ]*,");
final Scanner fileInput = new Scanner(new File(filePath));
while (fileInput.hasNextLine()) {
final String line = fileInput.nextLine().trim();
final Matcher matcher = pattern.matcher(line);
final StringBuilder builder = new StringBuilder();
byte flag = 0;
while (matcher.find()) {
final String match = matcher.group();
if(match.startsWith("FirstData")){ flag = 1;}
else if(match.startsWith("SecondData")){flag = 2;}
builder.append(line).append(",");
}
Main mainObj = (flag == 1) ? (new First()) : (flag == 2) ? (new Second()) : null;
if(null != mainObj){
mainObj.readData(builder.toString());
}
}
}
The approach above does require readData to accept a String instead of a Scanner, but the CSV-formatted string passed to each method lets each class's own behaviour handle the work.
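Another way to cut the duplication, sketched under the question's assumptions (Main, First, Second, storeData and a Main.readData(Scanner) method exist): map each header keyword to a factory so there is exactly one creation site. The imports below go at the top of Auto.java.
import java.io.File;
import java.io.FileNotFoundException;
import java.util.HashMap;
import java.util.Map;
import java.util.Scanner;
import java.util.function.Supplier;
// Keyword -> factory lookup replaces the duplicated if/else branches.
public void readFile(String filePath) throws FileNotFoundException {
    Map<String, Supplier<Main>> factories = new HashMap<>();
    factories.put("FirstData", First::new);
    factories.put("SecondData", Second::new);
    Scanner input = new Scanner(new File(filePath));
    Supplier<Main> current = null;
    while (input.hasNextLine()) {
        String line = input.nextLine().trim();
        Supplier<Main> header = null;
        for (Map.Entry<String, Supplier<Main>> e : factories.entrySet()) {
            if (line.startsWith(e.getKey())) {
                header = e.getValue();
            }
        }
        if (header != null) {
            current = header;            // a data header switches the record type
        } else if (current != null) {
            Main main = current.get();   // the only place objects are created
            main.readData(new Scanner(line).useDelimiter("\n[ ]*,"));
            storeData(main);
        }
    }
    input.close();
}
Adding a third subclass then means one extra factories.put(...) line instead of another else-if branch.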

A Producer-Consumer implemented using java threads writes only half the data to file

Hello, I have a problem wherein I have to read a huge CSV file, remove the first field from every line, and then store only the unique values to a file. I have written a program using threads which implements the producer-consumer pattern.
The class CSVLineStripper does what the name suggests: it takes a line out of the CSV, removes the first field from it, and adds that field to a queue. CSVLineProcessor then takes the fields and stores them one by one in an ArrayList, checking that only unique values are kept. The ArrayList is only used for reference; every unique field is written to a file.
Now what is happening is that all fields are stripped correctly. When I run about 3000 lines, everything is correct. When I start the program for all lines, which are around 700,000+, I get incomplete records; about 1000 unique values are not captured. Every field is enclosed in double quotes. What is weird is that the last field in the generated file is an incomplete word and the ending double quote is missing. Why is this happening?
import java.util.*;
import java.io.*;
class CSVData
{
Queue <String> refererHosts = new LinkedList <String> ();
Queue <String> uniqueReferers = new LinkedList <String> (); // final writable queue of unique referers
private int finished = 0;
private int safety = 100;
private String line = "";
public CSVData(){}
public synchronized String getCSVLine() throws InterruptedException{
int i = 0;
while(refererHosts.isEmpty()){
if(i < safety){
wait(10);
}else{
return null;
}
i++;
}
finished = 0;
line = refererHosts.poll();
return line;
}
public synchronized void putCSVLine(String CSVLine){
if(finished == 0){
refererHosts.add(CSVLine);
this.notifyAll();
}
}
}
class CSVLineStripper implements Runnable //Producer
{
private CSVData cd;
private BufferedReader csv;
public CSVLineStripper(CSVData cd, BufferedReader csv){ // CONSTRUCTOR
this.cd = cd;
this.csv = csv;
}
public void run() {
System.out.println("Producer running");
String line = "";
String referer = "";
String [] CSVLineFields;
int limit = 700000;
int lineCount = 1;
try {
while((line = csv.readLine()) != null){
CSVLineFields = line.split(",");
referer = CSVLineFields[0];
cd.putCSVLine(referer);
lineCount++;
if(lineCount >= limit){
break;
}
}
} catch (IOException e) {
e.printStackTrace();
}
System.out.println("<<<<<< PRODUCER FINISHED >>>>>>>");
}
private String printString(String [] str){
String string = "";
for(String s: str){
string = string + " "+s;
}
return string;
}
}
class CSVLineProcessor implements Runnable
{
private CSVData cd;
private FileWriter fw = null;
private BufferedWriter bw = null;
public CSVLineProcessor(CSVData cd, BufferedReader bufferedReader){ // CONSTRUCTOR
this.cd = cd;
try {
this.fw = new FileWriter("unique_referer_dump.txt");
} catch (IOException e) {
e.printStackTrace();
}
this.bw = new BufferedWriter(fw);
}
public void run() {
System.out.println("Consumer Started");
String CSVLine = "";
int safety = 10000;
ArrayList <String> list = new ArrayList <String> ();
while(CSVLine != null || safety <= 10000){
try {
CSVLine = cd.getCSVLine();
if(!list.contains(CSVLine)){
list.add(CSVLine);
this.CSVDataWriter(CSVLine);
}
} catch (Exception e) {
e.printStackTrace();
}
if(CSVLine == null){
break;
}else{
safety++;
}
}
System.out.println("<<<<<< CONSUMER FINISHED >>>>>>>");
System.out.println("Unique referers found in 30000 records "+list.size());
}
private void CSVDataWriter(String referer){
try {
bw.write(referer+"\n");
} catch (Exception e) {
e.printStackTrace();
}
}
}
public class RefererCheck2
{
public static void main(String [] args) throws InterruptedException
{
String pathToCSV = "/home/shantanu/DEV_DOCS/Contextual_Work/excite_domain_kw_site_wise_click_rev2.csv";
CSVResourceHandler csvResHandler = new CSVResourceHandler(pathToCSV);
CSVData cd = new CSVData();
CSVLineProcessor consumer = new CSVLineProcessor(cd, csvResHandler.getCSVFileHandler());
CSVLineStripper producer = new CSVLineStripper(cd, csvResHandler.getCSVFileHandler());
Thread consumerThread = new Thread(consumer);
Thread producerThread = new Thread(producer);
producerThread.start();
consumerThread.start();
}
}
This is how a sample input is:
"xyz.abc.com","4432"."clothing and gifts","true"
"pqr.stu.com","9537"."science and culture","false"
"0.stu.com","542331"."education, studies","false"
"m.dash.com","677665"."technology, gadgets","false"
Producer stores in queue:
"xyz.abc.com"
"pqr.stu.com"
"0.stu.com"
"m.dash.com"
Consumer stores uniques in the file, but after opening file contents one would see
"xyz.abc.com"
"pqr.stu.com"
"0.st
A couple of things: you are breaking after 700k lines, not 7m, and you are never flushing your buffered writer, so the last buffered chunk can be incomplete. Add a flush at the end and close all your resources. A debugger is a good idea :)
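Concretely, a minimal stand-alone sketch of the fix (file name taken from the question): close the writer in a finally block once the consumer is done, since close() flushes the buffered tail to disk.
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
public class SafeWriterDemo {
    public static void main(String[] args) {
        BufferedWriter bw = null;
        try {
            bw = new BufferedWriter(new FileWriter("unique_referer_dump.txt"));
            bw.write("\"xyz.abc.com\"\n"); // stand-in for the consume loop
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            if (bw != null) {
                try {
                    bw.close(); // flushes any buffered data before closing
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
In CSVLineProcessor that would mean calling bw.close() just before printing the "<<<<<< CONSUMER FINISHED >>>>>>>" message.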

using dbpedia spotlight in java or scala

Does anyone know where to find a little how-to on using DBpedia Spotlight in Java or Scala? Or could anyone explain how it's done? I can't find any information on this...
The DBpedia Spotlight wiki pages would be a good place to start.
And I believe the installation page lists the most popular ways (using a jar, or setting up a web service) to use the application.
It includes instructions on using the Java/Scala API with your own installation, or calling the web service.
Some additional data needs to be downloaded to run your own server for the full service, so it is a good time to make yourself a coffee.
You need to download the DBpedia Spotlight jar file. After that you can use the next two classes (author: pablomendes); I only made some changes.
public class db extends AnnotationClient {
//private final static String API_URL = "http://jodaiber.dyndns.org:2222/";
private static String API_URL = "http://spotlight.dbpedia.org:80/";
private static double CONFIDENCE = 0.0;
private static int SUPPORT = 0;
private static String powered_by ="non";
private static String spotter ="CoOccurrenceBasedSelector";//"LingPipeSpotter"=Annotate all spots
//AtLeastOneNounSelector"=No verbs and adjs.
//"CoOccurrenceBasedSelector" =No 'common words'
//"NESpotter"=Only Per.,Org.,Loc.
private static String disambiguator ="Default";//Default ;Occurrences=Occurrence-centric;Document=Document-centric
private static String showScores ="yes";
@SuppressWarnings("static-access")
public void configiration(double CONFIDENCE,int SUPPORT,
String powered_by,String spotter,String disambiguator,String showScores){
this.CONFIDENCE=CONFIDENCE;
this.SUPPORT=SUPPORT;
this.powered_by=powered_by;
this.spotter=spotter;
this.disambiguator=disambiguator;
this.showScores=showScores;
}
public List<DBpediaResource> extract(Text text) throws AnnotationException {
LOG.info("Querying API.");
String spotlightResponse;
try {
String Query=API_URL + "rest/annotate/?" +
"confidence=" + CONFIDENCE
+ "&support=" + SUPPORT
+ "&spotter=" + spotter
+ "&disambiguator=" + disambiguator
+ "&showScores=" + showScores
+ "&powered_by=" + powered_by
+ "&text=" + URLEncoder.encode(text.text(), "utf-8");
LOG.info(Query);
GetMethod getMethod = new GetMethod(Query);
getMethod.addRequestHeader(new Header("Accept", "application/json"));
spotlightResponse = request(getMethod);
} catch (UnsupportedEncodingException e) {
throw new AnnotationException("Could not encode text.", e);
}
assert spotlightResponse != null;
JSONObject resultJSON = null;
JSONArray entities = null;
try {
resultJSON = new JSONObject(spotlightResponse);
entities = resultJSON.getJSONArray("Resources");
} catch (JSONException e) {
//throw new AnnotationException("Received invalid response from DBpedia Spotlight API.");
}
LinkedList<DBpediaResource> resources = new LinkedList<DBpediaResource>();
if(entities!=null)
for(int i = 0; i < entities.length(); i++) {
try {
JSONObject entity = entities.getJSONObject(i);
resources.add(
new DBpediaResource(entity.getString("@URI"),
Integer.parseInt(entity.getString("@support"))));
} catch (JSONException e) {
LOG.error("JSON exception "+e);
}
}
return resources;
}
}
The second class:
/**
* @author pablomendes
*/
public abstract class AnnotationClient {
public Logger LOG = Logger.getLogger(this.getClass());
private List<String> RES = new ArrayList<String>();
// Create an instance of HttpClient.
private static HttpClient client = new HttpClient();
public List<String> getResu(){
return RES;
}
public String request(HttpMethod method) throws AnnotationException {
String response = null;
// Provide custom retry handler is necessary
method.getParams().setParameter(HttpMethodParams.RETRY_HANDLER,
new DefaultHttpMethodRetryHandler(3, false));
try {
// Execute the method.
int statusCode = client.executeMethod(method);
if (statusCode != HttpStatus.SC_OK) {
LOG.error("Method failed: " + method.getStatusLine());
}
// Read the response body.
byte[] responseBody = method.getResponseBody(); //TODO Going to buffer response body of large or unknown size. Using getResponseBodyAsStream instead is recommended.
// Deal with the response.
// Use caution: ensure correct character encoding and is not binary data
response = new String(responseBody);
} catch (HttpException e) {
LOG.error("Fatal protocol violation: " + e.getMessage());
throw new AnnotationException("Protocol error executing HTTP request.",e);
} catch (IOException e) {
LOG.error("Fatal transport error: " + e.getMessage());
LOG.error(method.getQueryString());
throw new AnnotationException("Transport error executing HTTP request.",e);
} finally {
// Release the connection.
method.releaseConnection();
}
return response;
}
protected static String readFileAsString(String filePath) throws java.io.IOException{
return readFileAsString(new File(filePath));
}
protected static String readFileAsString(File file) throws IOException {
byte[] buffer = new byte[(int) file.length()];
@SuppressWarnings("resource")
BufferedInputStream f = new BufferedInputStream(new FileInputStream(file));
f.read(buffer);
return new String(buffer);
}
static abstract class LineParser {
public abstract String parse(String s) throws ParseException;
static class ManualDatasetLineParser extends LineParser {
public String parse(String s) throws ParseException {
return s.trim();
}
}
static class OccTSVLineParser extends LineParser {
public String parse(String s) throws ParseException {
String result = s;
try {
result = s.trim().split("\t")[3];
} catch (ArrayIndexOutOfBoundsException e) {
throw new ParseException(e.getMessage(), 3);
}
return result;
}
}
}
public void saveExtractedEntitiesSet(String Question, LineParser parser, int restartFrom) throws Exception {
String text = Question;
int i=0;
//int correct =0 ; int error = 0;int sum = 0;
for (String snippet: text.split("\n")) {
String s = parser.parse(snippet);
if (s!= null && !s.equals("")) {
i++;
if (i<restartFrom) continue;
List<DBpediaResource> entities = new ArrayList<DBpediaResource>();
try {
entities = extract(new Text(snippet.replaceAll("\\s+"," ")));
System.out.println(entities.get(0).getFullUri());
} catch (AnnotationException e) {
// error++;
LOG.error(e);
e.printStackTrace();
}
for (DBpediaResource e: entities) {
RES.add(e.uri());
}
}
}
}
public abstract List<DBpediaResource> extract(Text text) throws AnnotationException;
public void evaluate(String Question) throws Exception {
evaluateManual(Question,0);
}
public void evaluateManual(String Question, int restartFrom) throws Exception {
saveExtractedEntitiesSet(Question,new LineParser.ManualDatasetLineParser(), restartFrom);
}
}
And the main():
public static void main(String[] args) throws Exception {
String Question ="Is the Amazon river longer than the Nile River?";
db c = new db ();
c.configiration(0.0, 0, "non", "CoOccurrenceBasedSelector", "Default", "yes");
System.out.println("resource : "+c.getResu());
}
I'll just add one little fix to your answer.
Your code runs if you add the evaluate method call:
public static void main(String[] args) throws Exception {
String question = "Is the Amazon river longer than the Nile River?";
db c = new db ();
c.configiration(0.0, 0, "non", "CoOccurrenceBasedSelector", "Default", "yes");
c.evaluate(question);
System.out.println("resource : "+c.getResu());
}
Lamine
In the request method of the second class (AnnotationClient) in Adel's answer, the author Pablo Mendes left an unfinished TODO:
TODO Going to buffer response body of large or unknown size. Using getResponseBodyAsStream instead is recommended.
It produces an annoying warning that can be resolved by replacing
byte[] responseBody = method.getResponseBody(); //TODO Going to buffer response body of large or unknown size. Using getResponseBodyAsStream instead is recommended.
// Deal with the response.
// Use caution: ensure correct character encoding and is not binary data
response = new String(responseBody);
with
Reader in = new InputStreamReader(method.getResponseBodyAsStream(), "UTF-8");
StringWriter writer = new StringWriter();
org.apache.commons.io.IOUtils.copy(in, writer);
response = writer.toString();
