I have 3 H2O models:
$ ls dataset/mojo
1. DeepLearning_model_python_1582176092021_2.zip
2. StackedEnsemble_BestOfFamily_AutoML_20200220_073620.zip
3. Word2Vec_model_python_1582176092021_1.zip
The binary models for these three were generated on v3.28.0.3, but I am trying to upgrade the H2O version and productionize on v3.30.0.5.
So I converted those three binaries successfully to MOJO models (as listed above).
When trying to upload these MOJO models using h2o.upload_mojo, Word2Vec alone fails with the following error:
In [15]: w2v_path = 'dataset/mojo/Word2Vec_model_python_1582176092021_1.zip'
In [16]: w2v_model = h2o.upload_mojo(w2v_path)
generic Model Build progress: | (failed) | 0%
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-16-734005ed70a8> in <module>
----> 1 w2v_model = h2o.upload_mojo(w2v_path)
~/.envs/h2o-test/lib/python3.8/site-packages/h2o/h2o.py in upload_mojo(mojo_path)
2149 frame_key = response["destination_frame"]
2150 mojo_estimator = H2OGenericEstimator(model_key = get_frame(frame_key))
-> 2151 mojo_estimator.train()
2152 print(mojo_estimator)
2153 return mojo_estimator
~/.envs/h2o-test/lib/python3.8/site-packages/h2o/estimators/estimator_base.py in train(self, x, y, training_frame, offset_column, fold_column, weights_column, validation_frame, max_runtime_secs, ignored_columns, model_id, verbose)
113 validation_frame=validation_frame, max_runtime_secs=max_runtime_secs,
114 ignored_columns=ignored_columns, model_id=model_id, verbose=verbose)
--> 115 self._train(parms, verbose=verbose)
116
117 def train_segments(self, x=None, y=None, training_frame=None, offset_column=None, fold_column=None,
~/.envs/h2o-test/lib/python3.8/site-packages/h2o/estimators/estimator_base.py in _train(self, parms, verbose)
205 return
206
--> 207 job.poll(poll_updates=self._print_model_scoring_history if verbose else None)
208 model_json = h2o.api("GET /%d/Models/%s" % (rest_ver, job.dest_key))["models"][0]
209 self._resolve_model(job.dest_key, model_json)
~/.envs/h2o-test/lib/python3.8/site-packages/h2o/job.py in poll(self, poll_updates)
75 if self.status == "FAILED":
76 if (isinstance(self.job, dict)) and ("stacktrace" in list(self.job)):
---> 77 raise EnvironmentError("Job with key {} failed with an exception: {}\nstacktrace: "
78 "\n{}".format(self.job_key, self.exception, self.job["stacktrace"]))
79 else:
OSError: Job with key $03010a64051932d4ffffffff$_8d0c64127137bd1eef16202889cf4fca failed with an exception: java.lang.IllegalArgumentException: Unsupported MOJO model 'word2vec'.
stacktrace:
java.lang.IllegalArgumentException: Unsupported MOJO model 'word2vec'.
at hex.generic.Generic$MojoDelegatingModelDriver.computeImpl(Generic.java:99)
at hex.ModelBuilder$Driver.compute2(ModelBuilder.java:248)
at hex.generic.Generic$MojoDelegatingModelDriver.compute2(Generic.java:78)
at water.H2O$H2OCountedCompleter.compute(H2O.java:1557)
at jsr166y.CountedCompleter.exec(CountedCompleter.java:468)
at jsr166y.ForkJoinTask.doExec(ForkJoinTask.java:263)
at jsr166y.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:974)
at jsr166y.ForkJoinPool.runWorker(ForkJoinPool.java:1477)
at jsr166y.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104)
The other two models upload without any issues and return a valid model_id. Any idea what the issue is here? From the docs, my understanding is that all three model types are supported as MOJOs.
I tried this with a cluster of 2 pods on K8s with 2Gi/1 CPU each, but it results in the same outcome as above.
Word2Vec is not currently in the list of allowed algos to import back into H2O.
The documentation is a little confusing and needs improvement. MOJO is a way to take H2O models into production; MOJOs are usable outside of H2O via H2O's genmodel. Some of those MOJOs can also be imported back into H2O and inspected, but not all of them. The first two algorithms you listed are supported; unfortunately, Word2Vec is not.
I've created a JIRA to track this issue. We should be able to enable at least scoring.
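For the two supported algorithms, the normal upload path works as expected. A minimal sketch (using the file paths from the question) would be:
import h2o

h2o.init()

# These two MOJO types can be imported back into H2O as generic models.
dl_model = h2o.upload_mojo('dataset/mojo/DeepLearning_model_python_1582176092021_2.zip')
se_model = h2o.upload_mojo('dataset/mojo/StackedEnsemble_BestOfFamily_AutoML_20200220_073620.zip')

print(dl_model.model_id, se_model.model_id)
The Word2Vec MOJO remains usable for scoring outside of H2O via genmodel; it just cannot be imported back into a running cluster yet.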
Related
I am decompiling a Java application and have already finished 99% of the .class files. But I have a problem with a couple of them: decompilation fails with errors of the same type.
Example:
Procyon: java.lang.IllegalArgumentException: Argument 'index' must be in the range [0, 63], but value was: 15873...
CFR:
Can not load the class specified:
org.benf.cfr.reader.util.CannotLoadClassException: Modules_4.class - java.lang.IndexOutOfBoundsException: Constant pool has 62 entries - attempted to access entry #30318
JDCore: returns null
Jadx:
ERROR - jadx error: Error load file: Modules_4.class
jadx.core.utils.exceptions.JadxRuntimeException: Error load file: Modules_4.class
at jadx.api.JadxDecompiler.loadFiles(JadxDecompiler.java:121)
at jadx.api.JadxDecompiler.load(JadxDecompiler.java:88)
at jadx.cli.JadxCLI.processAndSave(JadxCLI.java:34)
at jadx.cli.JadxCLI.main(JadxCLI.java:19)
Caused by: java.lang.ArrayIndexOutOfBoundsException: Index 15873 out of bounds for length 63
Fernflower:
Job Output:
java.lang.IndexOutOfBoundsException: Index 15873 out of bounds for length 63
at java.base/jdk.internal.util.Preconditions.outOfBounds(Preconditions.java:64)...
JAD:
Parsing Modules_4.class...The class file version is 50.0 (only 45.3, 46.0 and 47.0 are supported)
ItemCollectionInvalidIndex: constants: requested 15873, limit 63
download .class file
What is wrong?
There is nothing wrong with any of the decompilers I mentioned above.
It was a constant_pool_count issue, caused by offset troubles in the JPHP decompiler. So, if you are trying to reverse JPHP applications, use your own tool to split the .phb file into .class blocks, keeping in mind there are a couple of extra bytes before each of them.
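As a rough illustration of that splitting step, here is a sketch of my own (not from any JPHP tooling; the input file name Modules.phb is hypothetical), assuming each embedded class begins with the standard 0xCAFEBABE class-file magic:
MAGIC = b'\xca\xfe\xba\xbe'  # every Java class file starts with these four bytes

with open('Modules.phb', 'rb') as fh:  # hypothetical .phb container
    data = fh.read()

# Collect every offset where a class file appears to start. This is naive:
# the magic bytes could in principle also occur inside constant-pool data.
offsets = []
pos = data.find(MAGIC)
while pos != -1:
    offsets.append(pos)
    pos = data.find(MAGIC, pos + 1)

# Slice from each magic to the next one (or to end of file), dropping the few
# extra bytes the packer places before each block, and write each class out.
for i, start in enumerate(offsets):
    end = offsets[i + 1] if i + 1 < len(offsets) else len(data)
    with open('Modules_%d.class' % i, 'wb') as out:
        out.write(data[start:end])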
I am trying to build a deep learning model with a transformer architecture. While cleaning the dataset, the following error occurred.
I am using PyTorch and Google Colab, and I am trying to clean a dataset of Java methods and comments.
Tested Code
import re
from typing import List

from fast_trees.core import FastParser

parser = FastParser('java')

def get_cmt_params(cmt: str) -> List[str]:
    '''
    Grabs the parameter identifier names from a JavaDoc comment
    :param cmt: the comment to extract the parameter identifier names from
    :returns: an array of the parameter identifier names found in the given comment
    '''
    params = re.findall(r'@param\s+\w+', cmt)
    param_names = []
    for param in params:
        param_names.append(param.split()[1])
    return param_names
Occurred Error
Downloading repo https://github.com/tree-sitter/tree-sitter-java to /usr/local/lib/python3.7/dist-packages/fast_trees/tree-sitter-java.
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-31-64f6fa6ed39b> in <module>()
3 from fast_trees.core import FastParser
4
----> 5 parser.set_language = FastParser('java')
6
7 def get_cmt_params(cmt: str) -> List[str]:
3 frames
/usr/local/lib/python3.7/dist-packages/fast_trees/core.py in FastParser(lang)
96 }
97
---> 98 return PARSERS[lang]()
/usr/local/lib/python3.7/dist-packages/fast_trees/core.py in __init__(self)
46
47 def __init__(self):
---> 48 super().__init__()
49
50 def get_method_parameters(self, mthd: str) -> List[str]:
/usr/local/lib/python3.7/dist-packages/fast_trees/core.py in __init__(self)
15 class BaseParser:
16 def __init__(self):
---> 17 self.build_parser()
18
19 def build_parser(self):
/usr/local/lib/python3.7/dist-packages/fast_trees/core.py in build_parser(self)
35 self.language = Language(build_dir, self.LANG)
36 self.parser = Parser()
---> 37 self.parser.set_language(self.language)
38
39 # Cell
ValueError: Incompatible Language version 13. Must not be between 9 and 12
Can anybody help me solve this issue?
fast_trees uses tree_sitter, and according to the tree_sitter repo this is an incompatibility issue. If you know the owner of fast_trees, ask them to upgrade their tree_sitter version.
Or you can fork it and upgrade it yourself, but keep in mind it may not be backwards compatible, and it may not be just a simple version bump.
The fast-trees library uses the tree-sitter library, and they recommend using version 0.2.0 of tree-sitter with fast-trees. However, downgrading tree-sitter to 0.2.0 will not resolve your problem; I tried that myself.
So, rather than investing time figuring out the bug in tree-sitter, it is better to move to another stable library that satisfies your requirements. Since your requirement is to extract features from given Java code, you can use the javalang library for that.
javalang is a pure Python library for working with Java source code.
javalang provides a lexer and parser targeting Java 8. The
implementation is based on the Java language spec available at
http://docs.oracle.com/javase/specs/jls/se8/html/.
You can refer to it at https://pypi.org/project/javalang/0.13.0/.
Since javalang is a pure Python library, it will help you move forward with your research without these issues.
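As an example, here is a minimal sketch of my own (not from the javalang docs; the sample class is made up) that pulls the parameter names of each method from a piece of Java source:
import javalang

java_src = '''
public class Sample {
    /** Adds two numbers. */
    public int add(int first, int second) { return first + second; }
}
'''

# Parse the source into a syntax tree and walk every method declaration,
# collecting its parameter identifier names.
tree = javalang.parse.parse(java_src)
for _, method in tree.filter(javalang.tree.MethodDeclaration):
    print(method.name, [param.name for param in method.parameters])  # add ['first', 'second']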
I'm building an Android ROM from the Android source code, but after about 5 minutes it gives this error.
error: ro.build.fingerprint cannot exceed 91 bytes: Android/mini_emulator_x86/mini-emulator-x86:5.0.555/AOSP/username02280306:userdebug/test-keys (97)
make: *** [out/target/product/mini-emulator-x86/system/build.prop] Error 1
make: *** Deleting file `out/target/product/mini-emulator-x86/system/build.prop'
make: *** Waiting for unfinished jobs....
How do I increase the ro.build.fingerprint size limit?
Plus I'm building on a Mac.
Edit build/tools/post_process_props.py. Change lines as follows:
PROP_NAME_MAX = 31
# PROP_VALUE_MAX = 91
PROP_VALUE_MAX = 128
Edit bionic/libc/include/sys/system_properties.h. Change lines as follows:
#define PROP_NAME_MAX 32
// #define PROP_VALUE_MAX 92
#define PROP_VALUE_MAX 128
Do
make clean
make
You can also run the second make command in parallel using syntax such as
make -j8
Alternatively, you can specify the build fingerprint string as a command-line argument to make:
make -j5 BUILD_FINGERPRINT="....."
This will allow you to stay within the 91-byte limit.
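As a quick sanity check before kicking off another long build, a small standalone Python snippet (my own, not part of the AOSP tooling) can confirm that a candidate fingerprint fits the default 91-byte PROP_VALUE_MAX:
# Mirrors the default PROP_VALUE_MAX enforced by post_process_props.py.
PROP_VALUE_MAX = 91

def fits_property_limit(value: str) -> bool:
    '''Return True if the property value fits within PROP_VALUE_MAX bytes.'''
    return len(value.encode('utf-8')) <= PROP_VALUE_MAX

fingerprint = ('Android/mini_emulator_x86/mini-emulator-x86:5.0.555/'
               'AOSP/username02280306:userdebug/test-keys')
print(len(fingerprint.encode('utf-8')), fits_property_limit(fingerprint))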
We need to implement an application for evaluating the results of an online programming challenge. The users will implement the programming challenge and compile their source through a web interface. We are supposed to compile the submitted sources on the fly and present some statistics of the program, like expected memory consumption and possible performance indicators of the sources. Does anybody know how we can gather memory consumption and performance indicators of the program statically from the sources?
While you could possibly do static analysis of the source to infer performance characteristics, I suspect it would be far simpler to just run a JUnit test suite over the code.
If you can present your challenge as a code stub or interface, you should be able to create a suitable JUnit suite which validates correctness and tests performance.
Granted, JUnit may not be the best way of running performance tests but you can likely bend it to the task. Alternatively you could look at JMeter or something similar.
I found something very useful. I am not sure yet if this is exactly what I am looking for, and I have yet to analyse the results, but it is quite interesting.
We can gather some performance statistics using the HPROF profiler agent shipped with the JDK release. The good thing is that it can be run during compilation to produce some interesting statistics on the code being compiled. Following are some samples. More details can be found at http://download.oracle.com/javase/7/docs/webnotes/tsg/TSG-VM/html/tooldescr.html#gbluz
$ javac -J-agentlib:hprof=heap=sites Hello.java
SITES BEGIN (ordered by live bytes) Wed Oct 4 13:13:42 2006
percent live alloc'ed stack class
rank self accum bytes objs bytes objs trace name
1 44.13% 44.13% 1117360 13967 1117360 13967 301926 java.util.zip.ZipEntry
2 8.83% 52.95% 223472 13967 223472 13967 301927 com.sun.tools.javac.util.List
3 5.18% 58.13% 131088 1 131088 1 300996 byte[]
4 5.18% 63.31% 131088 1 131088 1 300995 com.sun.tools.javac.util.Name[]
$ javac -J-agentlib:hprof=heap=dump Hello.java
HEAP DUMP BEGIN (39793 objects, 2628264 bytes) Wed Oct 4 13:54:03 2006
ROOT 50000114 (kind=<thread>, id=200002, trace=300000)
ROOT 50000006 (kind=<JNI global ref>, id=8, trace=300000)
ROOT 50008c6f (kind=<Java stack>, thread=200000, frame=5)
:
CLS 50000006 (name=java.lang.annotation.Annotation, trace=300000)
loader 90000001
OBJ 50000114 (sz=96, trace=300001, class=java.lang.Thread#50000106)
name 50000116
group 50008c6c
contextClassLoader 50008c53
inheritedAccessControlContext 50008c79
blockerLock 50000115
OBJ 50008c6c (sz=48, trace=300000, class=java.lang.ThreadGroup#50000068)
name 50008c7d
threads 50008c7c
groups 50008c7b
ARR 50008c6f (sz=16, trace=300000, nelems=1,
elem type=java.lang.String[]#5000008e)
[0] 500007a5
CLS 5000008e (name=java.lang.String[], trace=300000)
super 50000012
loader 90000001
:
HEAP DUMP END
$ javac -J-agentlib:hprof=cpu=times Hello.java
CPU TIME (ms) BEGIN (total = 2082665289) Wed oct 4 13:43:42 2006
rank self accum count trace method
1 3.70% 3.70% 1 311243 com.sun.tools.javac.Main.compile
2 3.64% 7.34% 1 311242 com.sun.tools.javac.main.Main.compile
3 3.64% 10.97% 1 311241 com.sun.tools.javac.main.Main.compile
4 3.11% 14.08% 1 311173 com.sun.tools.javac.main.JavaCompiler.compile
5 2.54% 16.62% 8 306183 com.sun.tools.javac.jvm.ClassReader.listAll
6 2.53% 19.15% 36 306182 com.sun.tools.javac.jvm.ClassReader.list
7 2.03% 21.18% 1 307195 com.sun.tools.javac.comp.Enter.main
8 2.03% 23.21% 1 307194 com.sun.tools.javac.comp.Enter.complete
9 1.68% 24.90% 1 306392 com.sun.tools.javac.comp.Enter.classEnter
10 1.68% 26.58% 1 306388 com.sun.tools.javac.comp.Enter.classEnter
...
CPU TIME (ms) END
I'm trying to install WWW::HtmlUnit on Windows 7. These are the steps I ran through:
Install Inline::Java 0.53
Install WWW::HTMLUnit 0.15
At step 2, after nmake, I typed nmake test to test the module, but it failed. Here's the output:
C:\nmake test
Microsoft (R) Program Maintenance Utility Version 9.00.30729.01
Copyright (C) Microsoft Corporation. All rights reserved.
C:\Perl\bin\perl.exe "-MExtUtils::Command::MM" "-e" "test_harness(0, 'blib\lib', 'blib\arch')" t/*.t
t/00_basic...........
t/00_basic...........NOK 1/1# Failed test 'use WWW::HtmlUnit;'
# at t/00_basic.t line 9.
# Tried to use 'WWW::HtmlUnit'.
# Error: Class com.gargoylesoftware.htmlunit.WebClient not found at C:/Perl/site/lib/Inline/Java.pm line 619
# BEGIN failed--compilation aborted at (eval 4) line 2, <GEN7> line 4.
# Looks like you failed 1 test of 1.
t/00_basic...........dubious
Test returned status 1 (wstat 256, 0x100)
DIED. FAILED test 1
Failed 1/1 tests, 0.00% okay
t/01_hello...........Class com.gargoylesoftware.htmlunit.WebClient not found at C:/Perl/site/lib/Inline/Java.pm line 619
BEGIN failed--compilation aborted at t/01_hello.t line 4, <GEN7> line 4.
t/01_hello...........dubious
Test returned status 26 (wstat 6656, 0x1a00)
t/02_hello_sweet.....dubious
Test returned status 19 (wstat 4864, 0x1300)
t/03_clickhandler....Class com.gargoylesoftware.htmlunit.WebClient not found at C:/Perl/site/lib/Inline/Java.pm line 619
BEGIN failed--compilation aborted at t/03_clickhandler.t line 6, <GEN7> line 4.
t/03_clickhandler....dubious
Test returned status 29 (wstat 7424, 0x1d00)
DIED. FAILED tests 1-8
Failed 8/8 tests, 0.00% okay
Failed Test Stat Wstat Total Fail List of Failed
-------------------------------------------------------------------------------
t/00_basic.t 1 256 1 1 1
t/01_hello.t 26 6656 ?? ?? ??
t/02_hello_sweet.t 19 4864 ?? ?? ??
t/03_clickhandler.t 29 7424 8 16 1-8
Failed 4/4 test scripts. 9/9 subtests failed.
Files=4, Tests=9, 3 wallclock secs ( 0.00 cusr + 0.00 csys = 0.00 CPU)
Failed 4/4 test programs. 9/9 subtests failed.
NMAKE : fatal error U1077: 'C:\Perl\bin\perl.exe' : return code '0x1d'
Stop.
From the above log, I can see that:
Error: Class com.gargoylesoftware.htmlunit.WebClient could not be found.
I have no idea what I missed.
Any help would be appreciated.
Thanks.
Minh.
I found it.
There's a difference between path separators on Unix and Windows systems: Unix uses ':' as the delimiter, but Windows uses ';'. So what I did was open HtmlUnit.pm and change the ':' separators to ';'.
With WWW::HtmlUnit version 0.15, I made the changes at the lines below:
Line 78:
return join ';', map { "$jar_path/$_" } qw( # return join ':', map { "$jar_path/$_" } qw(
Line 127:
$custom_jars = join(';', @{$parameters{'jars'}}); # $custom_jars = join(':', @{$parameters{'jars'}});
Line 148:
CLASSPATH => collect_default_jars() . ";" . $custom_jars, # CLASSPATH => collect_default_jars() . ":" . $custom_jars,
And it works like magic.
(it wouldn't let me comment on an existing answer)
I see your answer about ':' vs ';'. I'll try to include a fix in the next WWW::HtmlUnit release (I am the author of the Perl bindings).