Meaning of @(#) characters - Java

In documented code I sometimes see things like this:
/*
 * @(#)File.java 1.142 09/04/01
What do characters like @(#) mean?

@(#) is the character string used by the Unix what command to filter strings from binaries, to list the components that were used to build that binary. For instance, what java on AIX yields:
java:
23 1.4 src/bos/usr/ccs/lib/libpthreads/init.c, libpth, bos520 8/19/99 12:20:14
61 1.14 src/bos/usr/ccs/lib/libc/__threads_init.c, libcthrd, bos520 7/11/00 12:04:14
src/tools/sov/java.c, tool, asdev, 20081128 1.83.1.36
src/misc/sov/copyrght.c, core, asdev, 20081128 1.8
while `strings java | grep '@(#)'` yields:
@(#)23 1.4 src/bos/usr/ccs/lib/libpthreads/init.c, libpth, bos520 8/19/99 12:20:14
@(#)61 1.14 src/bos/usr/ccs/lib/libc/__threads_init.c, libcthrd, bos520 7/11/00 12:04:14
@(#)src/tools/sov/java.c, tool, asdev, 20081128 1.83.1.36
@(#)src/misc/sov/copyrght.c, core, asdev, 20081128 1.8
@(#) was chosen as the marker because it is unlikely to occur elsewhere. Source code control systems typically add a line containing this marker and a description of the file version on synchronisation, expanding keywords with values reflecting the file contents.
For instance, the comment you list would be the result of expanding the SCCS keywords %Z%%M% %R%.%L% %E%, where %Z% translates into @(#).
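To make this concrete, here is a minimal Python sketch of what the what command does (the file path argument is just an example): scan a file's raw bytes for the @(#) marker and print the text that follows each occurrence.
import re
import sys

def what(path):
    """Minimal emulation of Unix `what`: list the strings that
    follow the @(#) marker inside a (possibly binary) file."""
    with open(path, 'rb') as f:
        data = f.read()
    # @(#) followed by a run of printable ASCII characters
    return [m.decode('ascii') for m in re.findall(rb'@\(#\)([ -~]+)', data)]

for entry in what(sys.argv[1]):
    print(entry)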

From (hazy) memory, that was the tag used by SCCS back in the "good old days". Given that (to my knowledge), BitKeeper uses SCCS underneath, it could be BitKeeper.

It is usually something that is added automatically by the version control system.

That construct has no special meaning in Java. It is just some text in a comment.
It looks like something that's inserted by a version control system.

Related

"ValueError: Incompatible Language version 13. Must not be between 9 and 12" with Google Colab

I am trying to build a deep learning model with a transformer architecture. The following error occurred while I was cleaning the dataset.
I am using PyTorch and Google Colab for this, and I am trying to clean a dataset of Java methods and comments.
Tested Code
import re
from typing import List

from fast_trees.core import FastParser

parser = FastParser('java')

def get_cmt_params(cmt: str) -> List[str]:
    '''
    Grabs the parameter identifier names from a JavaDoc comment
    :param cmt: the comment to extract the parameter identifier names from
    :returns: an array of the parameter identifier names found in the given comment
    '''
    params = re.findall(r'@param\s+\w+', cmt)
    param_names = []
    for param in params:
        param_names.append(param.split()[1])
    return param_names
Occurred Error
Downloading repo https://github.com/tree-sitter/tree-sitter-java to /usr/local/lib/python3.7/dist-packages/fast_trees/tree-sitter-java.
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-31-64f6fa6ed39b> in <module>()
3 from fast_trees.core import FastParser
4
----> 5 parser.set_language = FastParser('java')
6
7 def get_cmt_params(cmt: str) -> List[str]:
3 frames
/usr/local/lib/python3.7/dist-packages/fast_trees/core.py in FastParser(lang)
96 }
97
---> 98 return PARSERS[lang]()
/usr/local/lib/python3.7/dist-packages/fast_trees/core.py in __init__(self)
46
47 def __init__(self):
---> 48 super().__init__()
49
50 def get_method_parameters(self, mthd: str) -> List[str]:
/usr/local/lib/python3.7/dist-packages/fast_trees/core.py in __init__(self)
15 class BaseParser:
16 def __init__(self):
---> 17 self.build_parser()
18
19 def build_parser(self):
/usr/local/lib/python3.7/dist-packages/fast_trees/core.py in build_parser(self)
35 self.language = Language(build_dir, self.LANG)
36 self.parser = Parser()
---> 37 self.parser.set_language(self.language)
38
39 # Cell
ValueError: Incompatible Language version 13. Must not be between 9 and 12
Can anybody help me to solve this issue?
fast_trees uses tree-sitter, and according to the tree-sitter repo this is an incompatibility issue. If you know the owner of fast_trees, ask them to upgrade their tree-sitter version.
Or you can fork it and upgrade it yourself, but keep in mind it may not be backwards compatible if you take it upon yourself, and it may not be just a simple new-version install.
The fast-trees library uses the tree-sitter library, and its authors recommend using version 0.2.0 of tree-sitter with fast-trees. However, downgrading tree-sitter to version 0.2.0 will not resolve your problem; I tried that myself.
So, rather than investing time in figuring out the bug in tree-sitter, it is better to move to another stable library that satisfies your requirements. Since your requirement is to extract features from given Java code, you can use the javalang library for that.
javalang is a pure Python library for working with Java source code.
javalang provides a lexer and parser targeting Java 8. The
implementation is based on the Java language spec available at
http://docs.oracle.com/javase/specs/jls/se8/html/.
You can refer to it at https://pypi.org/project/javalang/0.13.0/.
Since javalang is a pure Python library, it should help you go forward with your research without these compatibility issues.
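As a starting point, here is a minimal sketch of the same parameter extraction done with javalang; the sample Demo class and the regex-based JavaDoc helper are illustrative, not part of your dataset.
import re
import javalang

def get_cmt_params(cmt):
    '''Grab the parameter identifier names from a JavaDoc comment.'''
    return [p.split()[1] for p in re.findall(r'@param\s+\w+', cmt)]

code = '''
public class Demo {
    /** @param a first addend  @param b second addend */
    public int add(int a, int b) { return a + b; }
}
'''
tree = javalang.parse.parse(code)
for path, method in tree.filter(javalang.tree.MethodDeclaration):
    # parameter names come straight from the parse tree, no regex needed
    print(method.name, [param.name for param in method.parameters])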

OSB fn:bea Function using XQuery Engine in Java

After some research I haven't found a solution, but quite a lot of people seem to have this problem:
I am trying to do an XQuery transformation in a Java application using
net.sf.saxon.s9api
However, I get this error when trying to compile my XQuery with XQueryExecutable exec = compiler.compile(...):
Error on line 13 column 3 of AivPumaRequest.xquery:
XPST0081 XQuery static error near #... fn-bea:inlinedXML(fn:concat#:
Prefix fn-bea has not been declared
Error on line 44 column 102 of AivPumaRequest.xquery:
XPST0081 XQuery static error near #... div xdt:dayTimeDuration('P1D'#:
Prefix xdt has not been declared
Error on line 199 column 3 of AivPumaRequest.xquery:
XPST0081 XQuery static error near #... fn-bea:inlinedXML(fn:concat#:
Prefix fn-bea has not been declared
Error on line 282 column 4 of AivPumaRequest.xquery:
XPST0081 XQuery static error near #... {fn-bea:inlinedXML(fn:concat#:
Prefix fn-bea has not been declared
net.sf.saxon.s9api.SaxonApiException: Prefix fn-bea has not been declared
Is there a way to statically declare this prefix, or what am I missing so that my XQuery engine (Saxon) finds the prefix?
The simple answer to your question is that you can declare namespace prefixes either within the query prolog using
declare namespace fn-bea = "http://some-appropriate-uri";
or in the Saxon API using
XQueryCompiler.declareNamespace("fn-bea", "http://some-appropriate-uri")
But this won't get you any further unless (a) you know what URI to bind the prefixes to, and (b) you make the functions with these names available to the query processor.
The reference to xdt:dayTimeDuration suggests to me that this query was written when XQuery was still a working draft. If you look at the 2005 working draft, for example
https://www.w3.org/TR/2005/CR-xquery-20051103/
you'll see in section 2 that it uses a built-in prefix
xdt = http://www.w3.org/2005/xpath-datatypes
By the time XQuery 1.0 became a recommendation, the dayTimeDuration data type had been moved into the standard XML Schema (xs) namespace, so you can probably simply replace "xdt" by "xs" - though you should be aware that the semantics of the language probably changed in minor details as well.
As for fn-bea:inlinedXML, the choice of prefix suggests to me that this was probably a built-in vendor extension in the BEA query processor, which was taken over by Oracle. The spec here:
https://docs.oracle.com/cd/E13162_01/odsi/docs10gr3/xquery/extensions.html
says:
fn-bea:inlinedXML Parses textual XML and returns an instance of the
XQuery 1.0 Data Model.
That suggests the function does something very similar to the XQuery 3.0 function fn:parse-xml(), so I suggest you try making that replacement in your query.

Getting error .. Unable to parse EL function ${class} [duplicate]

Currently I have a web project with JSF 1.2 and Facelets running in Tomcat 6.0.18.0. I decided to upgrade the servlet container, so I deployed to Tomcat 7, and all seemed OK until we hit one view using my custom Facelets functions.
javax.el.ELException: Failed to parse the expression [#{pz:instanceof(object,'com.project.domain.MyClass')}]
Caused by: org.apache.el.parser.ParseException: Encountered " ":" ": "" at line 1, column 5. Was expecting one of:
"}" ...
"." ...
"[" ...
This error occurs when parsing the following code:
<ui:repeat var="object" value="#{objects}">
<ui:fragment rendered="#{pz:instanceof(object,'com.project.domain.MyClass')}">
...
If I understand correctly, it throws an error because of the colon in the expression. I have tracked it down to the jasper-el jar that comes in the tomcat/lib directory; if I replace jasper.jar and jasper-el.jar with the ones from Tomcat 6.0.18, everything works well.
Has anyone else had this problem when upgrading their Tomcat? And how did they resolve it?
Could I deploy Tomcat 7 in production with these Jasper jars from Tomcat 6, or could this cause further problems?
This is actually a misleading exception. It has a different underlying cause. The function name instanceof is invalid.
The EL 2.2 specification says the following:
1.14 Reserved Words
The following words are reserved for the language and must not be used as
identifiers.
and, or, not, eq, ne, lt, gt, le, ge, true, false, null, instanceof, empty, div, mod
Note that many of these words are not in the language now, but they may be in the
future, so developers must avoid using these words.
and
1.19 Collected Syntax
...
Identifier ::= Java language identifier
...
Where the Java language identifier stands for keywords like instanceof, if, while, class, return, static, new, etc. They may not be used as variable/function names in EL. In case you have properties with those names, use the brace notation instead like so #{bean['class'].simpleName} instead of #{bean.class.simpleName}.
This check was introduced around Tomcat 7.0.4, as indicated by issue 50147, where someone else reported the same problem as you have. So, to solve your problem, you have to rename your EL function to, for example, isInstanceOf.
Add this line in catalina.properties ([tomcat folder]/conf), and it should fix the issue.
org.apache.el.parser.SKIP_IDENTIFIER_CHECK=true
However, you should not use the reserved words.
You can also try changing the syntax. I had the same exact problem with code that I was maintaining when we were moving from Tomcat 6 to 7. I had to change myobject.class.name to myobject['class'].name. After I made this change my code worked perfectly again.
Great hint, indeed! In my jspx I had to change ${instance.class.simpleName == ...} to ${instance['class'].simpleName eq ...}.
I was moving from vFabric on Tomcat 6 to vFabric on Tomcat 7.

Java Command Fails in NLTK Stanford POS Tagger

I request your kind help in solving the "Java command failed" error, which keeps being thrown whenever I try to tag an Arabic corpus of 2 megabytes. I have searched the web and the Stanford POS tagger mailing list, but I did not find a solution. I read some posts on similar problems, and it was suggested that memory had run out; I am not sure of that, as I still have 19GB of free memory. I tried every possible solution offered, but the same error keeps showing.
I have average command of Python and good command of Linux. I am using Linux Mint 17 KDE 64-bit, Python 3.4, an NLTK 3.0 alpha, and the Stanford POS tagger model for Arabic. This is my code:
import nltk
from nltk.tag.stanford import POSTagger
arabic_postagger = POSTagger("/home/mohammed/postagger/models/arabic.tagger", "/home/mohammed/postagger/stanford-postagger.jar", encoding='utf-8')
print("Executing tag_corpus.py...\n")
# Import corpus file
print("Importing data...\n")
file = open("test.txt", 'r', encoding='utf-8').read()
text = file.strip()
print("Tagging the corpus. Please wait...\n")
tagged_corpus = arabic_postagger.tag(nltk.word_tokenize(text))
If the corpus size is less than 1MB (= 100,000 words), there is no error. But when I try to tag a 2MB corpus, the following error message is shown:
Traceback (most recent call last):
File "/home/mohammed/experiments/current/tag_corpus2.py", line 17, in <module>
tagged_lst = arabic_postagger.tag(nltk.word_tokenize(text))
File "/usr/local/lib/python3.4/dist-packages/nltk-3.0a3-py3.4.egg/nltk/tag/stanford.py", line 59, in tag
return self.batch_tag([tokens])[0]
File "/usr/local/lib/python3.4/dist-packages/nltk-3.0a3-py3.4.egg/nltk/tag/stanford.py", line 81, in batch_tag
stdout=PIPE, stderr=PIPE)
File "/usr/local/lib/python3.4/dist-packages/nltk-3.0a3-py3.4.egg/nltk/internals.py", line 171, in java
raise OSError('Java command failed!')
OSError: Java command failed!
I intend to tag 300 Million words to be used in my Ph.D. research project. If I keep tagging 100 thousand words at a time, I will have to repeat the task 3000 times. It will kill me!
I really appreciate your kind help.
After your import lines, add this line:
nltk.internals.config_java(options='-Xmx2G')
This will increase the maximum RAM size that Java allows the Stanford POS Tagger to use. The '-Xmx2G' option raises the maximum allowable RAM to 2GB from the default of 512MB.
See What are the Xms and Xmx parameters when starting JVMs? for more information
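Putting that together, here is a hedged sketch (the model and jar paths are the ones from the question; the chunk size is just an example): raise the JVM heap first, then feed the tagger the corpus in chunks so the whole 300-million-word corpus never has to go through a single Java call.
import nltk
from nltk.tag.stanford import POSTagger

nltk.internals.config_java(options='-Xmx2G')  # raise the JVM heap limit

tagger = POSTagger("/home/mohammed/postagger/models/arabic.tagger",
                   "/home/mohammed/postagger/stanford-postagger.jar",
                   encoding='utf-8')

def tag_in_chunks(tokens, chunk_size=100000):
    # tag 100k tokens per JVM invocation instead of the whole corpus at once
    for i in range(0, len(tokens), chunk_size):
        yield from tagger.tag(tokens[i:i + chunk_size])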
If you're interested in how to debug your code, read on.
So we see that the command fails when handling a huge amount of data, and the first thing to look at is how Java is initialized in NLTK before calling the Stanford tagger; from https://github.com/nltk/nltk/blob/develop/nltk/tag/stanford.py#L19:
from nltk.internals import find_file, find_jar, config_java, java, _java_options
We see that the nltk.internals package is handling the different Java configurations and parameters.
Then we take a look at https://github.com/nltk/nltk/blob/develop/nltk/internals.py#L65 and we see that no value is set for Java's memory allocation.
In version 3.9.2, the StanfordTagger class constructor accepts a parameter called java_options which can be used to set the memory for the POSTagger and also the NERTagger.
E.g. pos_tagger = StanfordPOSTagger('models/english-bidirectional-distsim.tagger', path_to_jar='stanford-postagger-3.9.2.jar', java_options='-mx3g')
I found the answer by @alvas did not work for me, because StanfordTagger was overriding my memory setting with its built-in default of 1000m. Perhaps using nltk.internals.config_java after initializing StanfordPOSTagger might work, but I haven't tried that.

Use Eclipse to generate the Java classes corresponding to an ASN.1 description

I installed this ASN.1 editor plugin (http://asneditor.sourceforge.net) in my Eclipse and wrote this code:
-- Created: Mon May 06 19:38:15 CEST 2013
ASN-Module DEFINITIONS AUTOMATIC TAGS ::= BEGIN
Client ::= SEQUENCE { lientNumber INTEGER }
Server ::= SEQUENCE { lientNumber INTEGER, serverString String }
END
But I haven't found out how to use this editor to generate the Java classes corresponding to my ASN.1 description.
You need to use an ASN.1 compiler for that. You can try this one:
http://sourceforge.net/projects/jac-asn1/
A glance at the website suggests this plugin does not have the functionality you desire. It appears to be designed to make it easier to write ASN.1 grammars, but not to generate Java code for them.
I would suggest you Google for "asn1 java compiler" and try some of the current popular options.
