My problem is that after closing Eclipse, the "Saving workspace" step did not finish even after two hours, and it leaves me unable to do anything in Eclipse at all. I checked
C:\..\workspace\.metadata\.log and the error is:
!ENTRY org.eclipse.jdt.ui 4 10001 2016-12-25 12:51:13.037
!MESSAGE Internal Error
!STACK 1
Java Model Exception: Core Exception [code 3] Some characters cannot be mapped using "Cp1252" character encoding.
Either change the encoding or remove the characters which are not supported by the "Cp1252" character encoding.
at org.eclipse.jdt.internal.ui.javaeditor.DocumentAdapter.save(DocumentAdapter.java:474)
at org.eclipse.jdt.internal.core.CommitWorkingCopyOperation.executeOperation(CommitWorkingCopyOperation.java:123)
at org.eclipse.jdt.internal.core.JavaModelOperation.run(JavaModelOperation.java:729)
at org.eclipse.core.internal.resources.Workspace.run(Workspace.java:2313)
at org.eclipse.jdt.internal.core.JavaModelOperation.runOperation(JavaModelOperation.java:794)
at org.eclipse.jdt.internal.core.CompilationUnit.commitWorkingCopy(CompilationUnit.java:391)
at org.eclipse.jdt.ui.wizards.NewTypeWizardPage.createType(NewTypeWizardPage.java:2233)
at org.eclipse.jdt.internal.ui.wizards.NewClassCreationWizard.finishPage(NewClassCreationWizard.java:71)
at org.eclipse.jdt.internal.ui.wizards.NewElementWizard$2.run(NewElementWizard.java:118)
at org.eclipse.jdt.internal.core.BatchOperation.executeOperation(BatchOperation.java:39)
at org.eclipse.jdt.internal.core.JavaModelOperation.run(JavaModelOperation.java:729)
at org.eclipse.core.internal.resources.Workspace.run(Workspace.java:2313)
at org.eclipse.jdt.core.JavaCore.run(JavaCore.java:5358)
at org.eclipse.jdt.internal.ui.actions.WorkbenchRunnableAdapter.run(WorkbenchRunnableAdapter.java:106)
at org.eclipse.jface.operation.ModalContext$ModalContextThread.run(ModalContext.java:122)
Caused by: java.nio.charset.UnmappableCharacterException: Input length = 1
at java.nio.charset.CoderResult.throwException(Unknown Source)
at java.nio.charset.CharsetEncoder.encode(Unknown Source)
at org.eclipse.core.internal.filebuffers.ResourceTextFileBuffer.commitFileBufferContent(ResourceTextFileBuffer.java:366)
at org.eclipse.core.internal.filebuffers.ResourceFileBuffer.commit(ResourceFileBuffer.java:327)
at org.eclipse.jdt.internal.ui.javaeditor.DocumentAdapter.save(DocumentAdapter.java:472)
... 14 more
Caused by: org.eclipse.core.runtime.CoreException: Some characters cannot be mapped using "Cp1252" character encoding.
Either change the encoding or remove the characters which are not supported by the "Cp1252" character encoding.
at org.eclipse.core.internal.filebuffers.ResourceTextFileBuffer.commitFileBufferContent(ResourceTextFileBuffer.java:378)
at org.eclipse.core.internal.filebuffers.ResourceFileBuffer.commit(ResourceFileBuffer.java:327)
at org.eclipse.jdt.internal.ui.javaeditor.DocumentAdapter.save(DocumentAdapter.java:472)
at org.eclipse.jdt.internal.core.CommitWorkingCopyOperation.executeOperation(CommitWorkingCopyOperation.java:123)
at org.eclipse.jdt.internal.core.JavaModelOperation.run(JavaModelOperation.java:729)
at org.eclipse.core.internal.resources.Workspace.run(Workspace.java:2313)
at org.eclipse.jdt.internal.core.JavaModelOperation.runOperation(JavaModelOperation.java:794)
at org.eclipse.jdt.internal.core.CompilationUnit.commitWorkingCopy(CompilationUnit.java:391)
at org.eclipse.jdt.ui.wizards.NewTypeWizardPage.createType(NewTypeWizardPage.java:2233)
at org.eclipse.jdt.internal.ui.wizards.NewClassCreationWizard.finishPage(NewClassCreationWizard.java:71)
at org.eclipse.jdt.internal.ui.wizards.NewElementWizard$2.run(NewElementWizard.java:118)
at org.eclipse.jdt.internal.core.BatchOperation.executeOperation(BatchOperation.java:39)
at org.eclipse.jdt.internal.core.JavaModelOperation.run(JavaModelOperation.java:729)
at org.eclipse.core.internal.resources.Workspace.run(Workspace.java:2313)
at org.eclipse.jdt.core.JavaCore.run(JavaCore.java:5358)
at org.eclipse.jdt.internal.ui.actions.WorkbenchRunnableAdapter.run(WorkbenchRunnableAdapter.java:106)
at org.eclipse.jface.operation.ModalContext$ModalContextThread.run(ModalContext.java:122)
Caused by: java.nio.charset.UnmappableCharacterException: Input length = 1
at java.nio.charset.CoderResult.throwException(Unknown Source)
at java.nio.charset.CharsetEncoder.encode(Unknown Source)
at org.eclipse.core.internal.filebuffers.ResourceTextFileBuffer.commitFileBufferContent(ResourceTextFileBuffer.java:366)
... 16 more
The problem is that I don't know how to fix this without opening Eclipse (which I can't do, because it is stuck saving the workspace).
I'll be grateful for any help.
PS. Sorry for my English, I'm not a native speaker.
In the top menu of Eclipse, go to the Window menu:
Window –> Preferences –> General (expand it) –> Workspace (click on it).
Look for the box "Text file encoding". The default will be "Cp1252".
Select the "Other" radio button and pick "UTF-8" from the combo box.
After setting the encoding to UTF-8, you can use UTF-8 characters in your code, and you won't get the "Cp1252 character encoding" error any more.
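For illustration, here is a minimal hypothetical sketch (class name and sample string invented) of why the save fails: Cp1252 is a single-byte encoding with no mapping for many characters, while UTF-8 can encode all of them.

import java.nio.CharBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.Charset;
import java.nio.charset.CharsetEncoder;
import java.nio.charset.CodingErrorAction;

public class EncodingCheck {
    public static void main(String[] args) throws CharacterCodingException {
        // "ż" and "ł" have no byte value in Cp1252.
        String source = "String s = \"zażółć\";";
        // REPORT makes the encoder throw instead of substituting '?'.
        CharsetEncoder cp1252 = Charset.forName("Cp1252").newEncoder()
                .onUnmappableCharacter(CodingErrorAction.REPORT);
        try {
            cp1252.encode(CharBuffer.wrap(source));
        } catch (CharacterCodingException e) {
            // The same UnmappableCharacterException as in the Eclipse log above.
            System.out.println("Cp1252 cannot encode this source: " + e);
        }
        // UTF-8 encodes the same text without complaint.
        Charset.forName("UTF-8").newEncoder().encode(CharBuffer.wrap(source));
        System.out.println("UTF-8 encoded it fine.");
    }
}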
I'm trying to run the Mahout SGD classifier on a CSV file, and I'm getting this error:
[vineet@localhost bin]$ ./mahout trainlogistic --input ./filtered.csv --output model --target target --categories 33 \
--features 200 --passes 10 --predictors subject --types text --rate 50
hadoop binary is not in PATH,HADOOP_HOME/bin,HADOOP_PREFIX/bin, running locally
Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 6, Size: 4
at java.util.ArrayList.rangeCheck(ArrayList.java:604)
at java.util.ArrayList.get(ArrayList.java:382)
at org.apache.mahout.classifier.sgd.CsvRecordFactory.processLine(CsvRecordFactory.java:245)
at org.apache.mahout.classifier.sgd.TrainLogistic.mainToOutput(TrainLogistic.java:85)
at org.apache.mahout.classifier.sgd.TrainLogistic.main(TrainLogistic.java:65)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
at org.apache.mahout.driver.MahoutDriver.main(MahoutDriver.java:195)
The CSV file contains Unicode text and large text fields enclosed in quote characters.
I've tried the classifier on the sample donut.csv, and it works fine.
I also tried changing my header row to something like "id","subject","field2", etc., but it still doesn't work.
What am I doing wrong?
Some lines may be dirty and have only 4 attributes instead of 6 (note the Index: 6, Size: 4 in the exception). Check your data again, or try feeding just one line of data to validate my guess.
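If it helps, here is a rough hypothetical screen for short rows (the file name is taken from the question; everything else is invented). Note that the naive split does not honor commas inside quoted fields, which your file contains, so treat any mismatch only as a candidate line to inspect:

import java.io.BufferedReader;
import java.io.FileReader;

public class CsvColumnCheck {
    public static void main(String[] args) throws Exception {
        try (BufferedReader in = new BufferedReader(new FileReader("filtered.csv"))) {
            String header = in.readLine();
            if (header == null) return; // empty file
            int expected = header.split(",", -1).length;
            String line;
            int lineNo = 1;
            while ((line = in.readLine()) != null) {
                lineNo++;
                // -1 keeps trailing empty fields so short rows are not masked.
                int cols = line.split(",", -1).length;
                if (cols != expected) {
                    System.out.printf("line %d has %d columns (expected %d)%n",
                            lineNo, cols, expected);
                }
            }
        }
    }
}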
I have some problems creating a new model for the Stanford Parser.
I have also downloaded the latest version from Stanford:
http://nlp.stanford.edu/software/lex-parser.shtml
And here is the GENIA corpus, in two formats, XML and PTB (Penn Treebank).
The Stanford Parser can train on PTB files, so I downloaded the GENIA corpus, because I want to work with biomedical text:
http://categorizer.tmit.bme.hu/~illes/genia_ptb/ (link no longer available) (genia_ptb.tar.gz)
Then I have a short Main class to get the dependency representation of one biomedical sentence:
import edu.stanford.nlp.parser.lexparser.LexicalizedParser;
import edu.stanford.nlp.parser.lexparser.Options;
import edu.stanford.nlp.trees.Treebank;

Options op = new Options(); // default English parser options
String treebankPath = "/stanford-parser-2012-05-22/genia_ptb/GENIA_treebank_v1/ptb";
Treebank tr = op.tlpParams.diskTreebank();
tr.loadPath(treebankPath);
LexicalizedParser lpc = LexicalizedParser.trainFromTreebank(tr, op);
I have tried different approaches but always get the same result: an error on the last line. This is my output:
Currently Fri Jun 01 15:02:57 CEST 2012
Options parameters:
useUnknownWordSignatures 2
smoothInUnknownsThreshold 100
smartMutation false
useUnicodeType false
unknownSuffixSize 1
unknownPrefixSize 1
flexiTag true
useSignatureForKnownSmoothing false
parserParams edu.stanford.nlp.parser.lexparser.EnglishTreebankParserParams
forceCNF false
doPCFG true
doDep false
freeDependencies false
directional true
genStop true
distance true
coarseDistance false
dcTags false
nPrune false
Train parameters: smooth=false PA=true GPA=false selSplit=true (400.0; deleting [VP^SQ, VP^VP, VP^SINV, VP^NP]) mUnary=1 mUnaryTags=false sPPT=false tagPA=true tagSelSplit=false (0.0) rightRec=true leftRec=false collinsPunc=false markov=true mOrd=2 hSelSplit=true (10) compactGrammar=3 postPA=false postGPA=false selPSplit=false (0.0) tagSelPSplit=false (0.0) postSplitWithBase=false fractionBeforeUnseenCounting=0.5 openClassTypesThreshold=50 preTransformer=null taggedFiles=null
Using EnglishTreebankParserParams splitIN=4 sPercent=true sNNP=0 sQuotes=false sSFP=false rbGPA=false j#=false jJJ=false jNounTags=false sPPJJ=false sTRJJ=false sJJCOMP=false sMoreLess=false unaryDT=true unaryRB=true unaryPRP=false reflPRP=false unaryIN=false sCC=1 sNT=false sRB=false sAux=2 vpSubCat=false mDTV=2 sVP=3 sVPNPAgr=false sSTag=0 mVP=false sNP%=0 sNPPRP=false dominatesV=1 dominatesI=false dominatesC=false mCC=0 sSGapped=4 numNP=false sPoss=1 baseNP=1 sNPNNP=0 sTMP=1 sNPADV=1 cTags=true rightPhrasal=false gpaRootVP=false splitSbar=0 mPPTOiIN=0
Binarizing trees...done. Time elapsed: 141 ms
Extracting PCFG...done. Time elapsed: 56 ms
Compiling grammar...done Time elapsed: 1 ms
Extracting Lexicon...Exception in thread "main" edu.stanford.nlp.util.ReflectionLoading$ReflectionLoadingException: edu.stanford.nlp.util.MetaClass$ClassCreationException: java.lang.ClassNotFoundException: edu.stanford.nlp.parser.lexparser.EnglishUnknownWordModelTrainer
at edu.stanford.nlp.util.ReflectionLoading.loadByReflection(ReflectionLoading.java:39)
at edu.stanford.nlp.parser.lexparser.BaseLexicon.initializeTraining(BaseLexicon.java:335)
at edu.stanford.nlp.parser.lexparser.LexicalizedParser.getParserFromTreebank(LexicalizedParser.java:800)
at edu.stanford.nlp.parser.lexparser.LexicalizedParser.trainFromTreebank(LexicalizedParser.java:226)
at edu.stanford.nlp.parser.lexparser.LexicalizedParser.trainFromTreebank(LexicalizedParser.java:237)
at ABravoDemo.main(ABravoDemo.java:35)
Caused by: edu.stanford.nlp.util.MetaClass$ClassCreationException: java.lang.ClassNotFoundException: edu.stanford.nlp.parser.lexparser.EnglishUnknownWordModelTrainer
at edu.stanford.nlp.util.MetaClass.createFactory(MetaClass.java:353)
at edu.stanford.nlp.util.MetaClass.createInstance(MetaClass.java:370)
at edu.stanford.nlp.util.ReflectionLoading.loadByReflection(ReflectionLoading.java:37)
... 5 more
Caused by: java.lang.ClassNotFoundException: edu.stanford.nlp.parser.lexparser.EnglishUnknownWordModelTrainer
at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
at java.lang.ClassLoader.loadClass(ClassLoader.java:303)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:316)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:169)
at edu.stanford.nlp.util.MetaClass$ClassFactory.construct(MetaClass.java:119)
at edu.stanford.nlp.util.MetaClass$ClassFactory.<init>(MetaClass.java:192)
at edu.stanford.nlp.util.MetaClass$ClassFactory.<init>(MetaClass.java:53)
at edu.stanford.nlp.util.MetaClass.createFactory(MetaClass.java:349)
... 7 more
How can I create a new model with this corpus?
As andrucz stated in his comment, the real cause of your problem seems to be a missing class.
Check whether you imported your library correctly, and make sure that it contains the class EnglishUnknownWordModelTrainer in edu.stanford.nlp.parser.lexparser.
(If you're using Maven, verify that you added the dependency correctly; a quick Google search brought this up: Stanford Parser Maven Repo.)
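Regardless of the build setup, a quick probe tells you whether the class is visible at runtime. This is only a sketch; the class name is copied from the stack trace, the rest is illustrative:

public class ClasspathCheck {
    public static void main(String[] args) {
        try {
            // If this throws, the Stanford Parser jar on the classpath does
            // not contain the trainer class the reflection code is loading.
            Class.forName("edu.stanford.nlp.parser.lexparser.EnglishUnknownWordModelTrainer");
            System.out.println("Class found - the classpath looks fine.");
        } catch (ClassNotFoundException e) {
            System.out.println("Class missing - check the parser jar and its version.");
        }
    }
}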
Did the NLP library install correctly?
Check the logs to verify there are no errors. Most of the time this issue comes up when the Stanford NLP library has not installed correctly.
A quick way to check is to run the GUI and try out the parser: if it runs successfully, the library installed correctly; if it throws errors, you know the installation is broken.
The Stanford website also mentions this; take a look:
If you're new to parsing, you can start by running the GUI to try out the parser. Scripts are included for linux (lexparser-gui.sh) and Windows (lexparser-gui.bat).
Take a look at the Javadoc lexparser package documentation and LexicalizedParser class documentation. (Point your web browser at the index.html file in the included javadoc directory and navigate to those items.)
Look at the parser FAQ for answers to common questions.
If none of that helps, please see our email guidelines for instructions on how to reach us for further assistance.
Check whether you have imported the library correctly, make sure that it contains the class EnglishUnknownWordModelTrainer, and also make sure that the version you downloaded works properly with the GENIA corpus.
I frequently get what appears to be a stackoverflow error ;-) from YUICompressor. The following is the first part of thousands of error lines that come from attempting to compress a 24,074-byte CSS stylesheet (note the "Caused by: java.lang.StackOverflowError" about 8 lines down):
iMac1:src jas$ min ../style2.min.css style2.css
Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.yahoo.platform.yui.compressor.Bootstrap.main(Bootstrap.java:21)
Caused by: java.lang.StackOverflowError
at java.lang.Character.codePointAt(Character.java:2335)
at java.util.regex.Pattern$CharProperty.match(Pattern.java:3344)
at java.util.regex.Pattern$Branch.match(Pattern.java:4114)
... (plus 1021 more error lines)
The errors usually happen after adding a couple of lines to the file being compressed. The CSS is fine and works perfectly in uncompressed form. I don't see a particular pattern in the types of selectors added to the file that cause the errors. In this case, adding the following selector to a previously compressible file triggered them:
#thisisatest
{
margin-left:87px;
}
I am wondering if there is perhaps a flag to the java command to enlarge the stack that might help. Or, if that is not the problem, what is?
EDIT:
As I was posting this question, it dawned on me that I should check the java command to see if there was a parameter to enlarge the stack. It turns out there is: -Xssn, where "n" indicates the stack size. Its default value is 512k. So I tried 1024k, but that still led to the stack overflow. Trying 2048k works, however, and I think this could be the solution.
EDIT 2:
While I no longer use this method for minification, to be more specific, here is the full command (which I had set up as a shell alias), showing how the -Xss2048k parameter is used:
java -Xss2048k -jar ~/Documents/RepHunter/Website\ Materials/Code/Third\ Party\ Libraries/YUI\ Compressor/yuicompressor-2.4.8.jar --type css -o
As posted in my edit, the solution was to add the stack-size parameter to the java command. The clue was the pair of lines starting at the 5th "at" line of the error, as follows:
at com.yahoo.platform.yui.compressor.Bootstrap.main(Bootstrap.java:21)
Caused by: java.lang.StackOverflowError
Seeing that the issue was a "StackOverflowError" ;-) suggested trying an increased stack size. The default is 512k. My first try of 1024k did not work, but increasing it to 2048k did, and I have had no further issues.
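For what it's worth, here is a small hypothetical demo of the same failure mode (class and pattern invented for illustration): Java's regex engine matches non-deterministic repetition recursively, so a long enough input overflows the stack no matter how valid the input is, and -Xss raises the depth the engine can reach.

public class RegexStackDemo {
    public static void main(String[] args) {
        // Build a long input; each repetition of the group below adds
        // recursion frames inside java.util.regex.Pattern.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 100000; i++) {
            sb.append("a ");
        }
        try {
            // The alternation makes the group non-deterministic, forcing the
            // recursive Branch.match path seen in the YUICompressor trace.
            sb.toString().matches("(a|a )+");
            System.out.println("matched without overflow");
        } catch (StackOverflowError e) {
            System.out.println("StackOverflowError - same symptom as YUICompressor");
        }
    }
}

Shrink the input, or raise -Xss far enough, and the error disappears.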
It appears that Oracle's Java client has a bug: if the tnsnames.ora file has misplaced spaces/tabs/newlines in particular places, you get an exception with the following trace:
java.lang.ArrayIndexOutOfBoundsException: <some number>
at oracle.net.nl.NVTokens.parseTokens(Unknown Source)
at oracle.net.nl.NVFactory.createNVPair(Unknown Source)
at oracle.net.nl.NLParamParser.addNLPListElement(Unknown Source)
at oracle.net.nl.NLParamParser.initializeNlpa(Unknown Source)
at oracle.net.nl.NLParamParser.<init>(Unknown Source)
at oracle.net.resolver.TNSNamesNamingAdapter.loadFile(Unknown Source)
at oracle.net.resolver.TNSNamesNamingAdapter.checkAndReload(Unknown Source)
at oracle.net.resolver.TNSNamesNamingAdapter.resolve(Unknown Source)
at oracle.net.resolver.NameResolver.resolveName(Unknown Source)
at oracle.net.resolver.AddrResolution.resolveAndExecute(Unknown Source)
at oracle.net.ns.NSProtocol.establishConnection(Unknown Source)
at oracle.net.ns.NSProtocol.connect(Unknown Source)
at oracle.jdbc.driver.T4CConnection.connect(T4CConnection.java:1037)
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:282)
at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:468)
at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:165)
at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:35)
at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:839)
at java.sql.DriverManager.getConnection(DriverManager.java:582)
at java.sql.DriverManager.getConnection(DriverManager.java:185)
If you take a C++ application and connect to the database with the same tnsnames.ora in use, it works fine. The same goes for sqlplus. Also, tnsping, which has to parse this file, has no problem resolving any service name. It seems like Oracle was too lazy to .trim() the values or something, and it is the same problem with Oracle client versions 9, 10 and 11.
Any idea why this problem exists, and what exactly is wrong with the tnsnames.ora format? (I just remove all whitespace to work around it.)
I tried the advice from GriffeyDog, but unfortunately it did not solve the problem, so eventually I took the check-for-yourself approach:
Oracle's documentation states that the structure of a record in the tnsnames.ora file should be as such:
net_service_name=
  (DESCRIPTION=
    (ADDRESS=...)
    (ADDRESS=...)
    (CONNECT_DATA=
      (SERVICE_NAME=sales.us.example.com)))
Ours was:
net_service_name=
(DESCRIPTION=
(ADDRESS=...)
(ADDRESS=...)
(CONNECT_DATA=
(SERVICE_NAME=sales.us.example.com)))
Apparently the indentation is crucial: if any of the lines in the block of a single net_service_name starts at column 1, as in ours above, this exception is thrown.
Only once you add indentation to all of them (spaces or a tab both work) does it work. It doesn't have to look good, but each line has to have an offset of some sort.
Important note: the only problem is with '('; the indentation rules don't apply to ')'.
E.g. the below example is perfectly fine:
net_service_name=
  (DESCRIPTION=
    (ADDRESS=...
)
    (ADDRESS=...
)
    (CONNECT_DATA=
      (SERVICE_NAME=sales.us.example.com))
)
After searching for documentation of this issue, I finally found that it is indeed documented, at http://download.oracle.com/docs/cd/A57673_01/DOC/net/doc/NWUS233/apb.htm
And here is the important excerpt:
Even if you do not choose to indent your files in this way, you must indent a wrapped line by at least one space, or it will be misread as a new parameter. The following layout is acceptable:
(ADDRESS=(COMMUNITY=tcpcom.world)(PROTOCOL=tcp)
      (HOST=max.world)(PORT=1521))
The following layout is not acceptable:
(ADDRESS=(COMMUNITY=tcpcom.world)(PROTOCOL=tcp)
(HOST=max.world)(PORT=1521))
I've seen similar issues arise when a text file is saved with Unix-style line endings (LF) versus DOS/Windows-style (CR/LF), or vice versa. You might try opening your tnsnames.ora file in an editor that can save in both formats and see if you can correct the problem that way.
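If you would rather script the conversion than rely on an editor, here is a minimal sketch (assuming the file is in the current directory and small enough to load into memory):

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class NormalizeEol {
    public static void main(String[] args) throws Exception {
        // Rewrite tnsnames.ora with DOS/Windows-style CR/LF line endings;
        // to convert to Unix-style instead, use replaceAll("\r\n", "\n").
        Path file = Paths.get("tnsnames.ora");
        String text = new String(Files.readAllBytes(file));
        Files.write(file, text.replaceAll("\\r?\\n", "\r\n").getBytes());
    }
}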
I'm working with a legacy Java app that has no logging and just prints all information to the console. Most exceptions are also "handled" by just calling printStackTrace().
In a nutshell, I've redirected the System.out and System.err streams to a log file, and now I need to parse that log file. So far so good, but I'm having problems trying to parse the log file for stack traces.
Some of the code is obfuscated as well, so I need to run the stack traces through a utility app to de-obfuscate them. I'm trying to automate all of this.
The closest I've come so far is to match the initial exception line using this:
.+Exception[^\n]+
And finding the "at ..(..)" lines using:
(\t+\Qat \E.+\s+)+
But I can't figure out how to put them together to get the full stack trace.
Basically, the log files look something like the following. There is no fixed structure, and the lines before and after stack traces are completely random:
Modem ERROR (AT
Owner: CoreTalk
) - TIMEOUT
IN []
Try Open: COM3
javax.comm.PortInUseException: Port currently owned by CoreTalk
at javax.comm.CommPortIdentifier.open(CommPortIdentifier.java:337)
...
at UniPort.modemService.run(modemService.java:103)
Handling file: C:\Program Files\BackBone Technologies\CoreTalk 2006\InputXML\notify
java.io.FileNotFoundException: C:\Program Files\BackBone Technologies\CoreTalk 2006\InputXML\notify (The system cannot find the file specified)
at java.io.FileInputStream.open(Native Method)
...
at com.gobackbone.Store.a.a.handle(Unknown Source)
at com.jniwrapper.win32.io.FileSystemWatcher.fireFileSystemEvent(FileSystemWatcher.java:223)
...
at java.lang.Thread.run(Unknown Source)
Load Additional Ports
... Lots of random stuff
IN []
[Fatal Error] .xml:6:114: The entity name must immediately follow the '&' in the entity reference.
org.xml.sax.SAXParseException: The entity name must immediately follow the '&' in the entity reference.
at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(Unknown Source)
...
at com.gobackbone.Store.a.a.run(Unknown Source)
Looks like you just need to paste them together (and use a newline as glue):
.+Exception[^\n]+\n(\t+\Qat \E.+\s+)+
But I would change your regex a bit:
^.+Exception[^\n]++(\s+at .++)+
This combines the whitespace between the at... lines and uses possessive quantifiers to avoid backtracking.
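Here is a quick hypothetical harness (class name and the inlined sample invented) showing the combined pattern pulling a trace out of a log fragment like yours:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class StackTraceExtractor {
    public static void main(String[] args) {
        String log = "Try Open: COM3\n"
                + "javax.comm.PortInUseException: Port currently owned by CoreTalk\n"
                + "\tat javax.comm.CommPortIdentifier.open(CommPortIdentifier.java:337)\n"
                + "\tat UniPort.modemService.run(modemService.java:103)\n"
                + "Load Additional Ports\n";
        // MULTILINE makes ^ anchor at each line start, not just at input start.
        Pattern p = Pattern.compile("^.+Exception[^\\n]++(\\s+at .++)+", Pattern.MULTILINE);
        Matcher m = p.matcher(log);
        while (m.find()) {
            System.out.println("--- stack trace ---");
            System.out.println(m.group());
        }
    }
}

Each m.group() is a complete trace, ready to be piped into your de-obfuscation utility.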
We have been using ANTLR to tackle the parsing of log files (in a different application area). It's not trivial, but if this is a critical task for you, it will serve you better than regexes.
I get good results using
perl -n -e 'm/(Exception)|(\tat )/ && print' /var/log/jboss4.2/debian/server.log
It dumps all lines containing Exception or \tat. Since the matching happens in one pass over the file, the original order is preserved.