I am testing the DJL framework (https://djl.ai/).
When I try to load the model with ModelZoo.loadModel(criteria),
I get this error:
java.util.ServiceConfigurationError: ai.djl.repository.zoo.ZooProvider: Provider ai.djl.pytorch.zoo.PtZooProvider not found
Criteria<BufferedImage, DetectedObjects> criteria =
        Criteria.builder()
                .optApplication(Application.CV.OBJECT_DETECTION)
                .setTypes(BufferedImage.class, DetectedObjects.class)
                .optFilter("size", "512")
                .optFilter("backbone", "resnet50")
                .optFilter("flavor", "v1")
                .optFilter("dataset", "voc")
                .optProgress(new ProgressBar())
                .build();
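For context, here is roughly how the criteria is then used. This is a minimal sketch based on the DJL model zoo API; the img variable and the exception handling are assumed, and import paths may differ slightly between DJL versions.

import ai.djl.inference.Predictor;
import ai.djl.modality.cv.output.DetectedObjects;
import ai.djl.repository.zoo.ModelZoo;
import ai.djl.repository.zoo.ZooModel;
import java.awt.image.BufferedImage;

// Load the model from the zoo and run a single detection.
// Checked exceptions (IOException, ModelNotFoundException, TranslateException, ...)
// are left to the caller in this sketch.
try (ZooModel<BufferedImage, DetectedObjects> model = ModelZoo.loadModel(criteria);
     Predictor<BufferedImage, DetectedObjects> predictor = model.newPredictor()) {
    DetectedObjects detection = predictor.predict(img); // img: a BufferedImage loaded elsewhere
    System.out.println(detection);
}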
I finally found a solution: I just added this to my Maven pom.xml file:
<dependency>
    <groupId>ai.djl.pytorch</groupId>
    <artifactId>pytorch-native-auto</artifactId>
    <version>1.4.0-SNAPSHOT</version>
</dependency>
It seems you are using the master branch. The latest master branch should have fixed this issue. PyTorch support is not yet official (it will be released by early April). You can try the stable release by checking out v0.3.0.
When I use the code below, I get a warning that it is deprecated:
renderRequest.getParameter("param")
So I switched to
renderRequest.getRenderParameters().getValue("param")
as suggested for v3.0. But after that I get the following error at run time:
2020-11-19 02:58:02.537 ERROR [http-nio-8080-exec-6][render_portlet_jsp:131] null
java.lang.UnsupportedOperationException: Requires 3.0 opt-in
at com.liferay.portlet.internal.PortletRequestImpl.getRenderParameters(PortletRequestImpl.java:520)
at com.training.portlet.MyPortlet5.doView(MyPortlet5.java:61)
at com.liferay.portal.kernel.portlet.LiferayPortlet.doDispatch(LiferayPortlet.java:303)
at com.liferay.portal.kernel.portlet.bridges.mvc.MVCPortlet.doDispatch(MVCPortlet.java:478)
at javax.portlet.GenericPortlet.render(GenericPortlet.java:291)
at com.liferay.portal.kernel.portlet.bridges.mvc.MVCPortlet.render(MVCPortlet.java:302)
at com.liferay.portlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:127)
at com.liferay.portlet.ScriptDataPortletFilter.doFilter(ScriptDataPortletFilter.java:58)
at com.liferay.portlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:124)
at com.liferay.portal.kernel.portlet.PortletFilterUtil.doFilter(PortletFilterUtil.java:71)
at com.liferay.portal.kernel.servlet.PortletServlet.service(PortletServlet.java:115)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:741)
at org.eclipse.equinox.http.servlet.internal.registration.EndpointRegistration.service(EndpointRegistration.java:153)
at org.eclipse.equinox.http.servlet.internal.servlet.ResponseStateHandler.processRequest(ResponseStateHandler.java:62)
at org.eclipse.equinox.http.servlet.internal.context.DispatchTargets.doDispatch(DispatchTargets.java:120)
at org.eclipse.equinox.http.servlet.internal.servlet.RequestDispatcherAdaptor.include(RequestDispatcherAdaptor.java:48)
at com.liferay.portlet.internal.InvokerPortletImpl.invoke(InvokerPortletImpl.java:573)
at com.liferay.portlet.internal.InvokerPortletImpl.invokeRender(InvokerPortletImpl.java:670)
at com.liferay.portlet.internal.InvokerPortletImpl.render(InvokerPortletImpl.java:362)
at
This issue occurred because my portlet module was using portlet API v2.0 by default, and getRenderParameters() is only available since v3.0. Once I explicitly opted in to v3.0 in the component class and did a Gradle refresh, the issue was fixed.
I fixed this issue by adding
"javax.portlet.version=3.0"
to the portlet's component class properties.
@Component(
    immediate = true,
    property = {
        "javax.portlet.version=3.0",
        "com.liferay.portlet.display-category=category.app",
        "com.liferay.portlet.instanceable=true",
        "javax.portlet.init-param.template-path=/",
        "javax.portlet.resource-bundle=content.Language",
        "javax.portlet.security-role-ref=power-user,user",
        "javax.portlet.init-param.view-template=/myportlet5/view.jsp",
        "javax.portlet.name=" + MyModuleMainPortletKeys.MY_PORTLET5
    },
    service = Portlet.class
)
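With the 3.0 opt-in in place, reading the parameter in doView works roughly like this (a hedged sketch; the parameter name is a placeholder):

import java.io.IOException;
import javax.portlet.PortletException;
import javax.portlet.RenderRequest;
import javax.portlet.RenderResponse;

@Override
public void doView(RenderRequest renderRequest, RenderResponse renderResponse)
        throws IOException, PortletException {
    // Without "javax.portlet.version=3.0" this call throws
    // java.lang.UnsupportedOperationException: Requires 3.0 opt-in
    String param = renderRequest.getRenderParameters().getValue("param");
    super.doView(renderRequest, renderResponse);
}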
While running a JHipster command, I got the following errors:
+ jhipster axon --skip-git --blueprint cst
INFO! Using JHipster version installed globally
INFO! No custom sharedOptions found within blueprint: generator-jhipster-cst at /usr/local/lib/node_modules/generator-jhipster-cst
events.js:288
throw er; // Unhandled 'error' event
^
TypeError: Cannot read property 'replace' of undefined
at new module.exports (/Users/.../jhipster/generator-jhipster-cst/generators/subgenerator-base.js:27:49)
at new module.exports (/Users/.../jhipster/generator-jhipster-cst/generators/aws/index.js:3:18)
at Environment.instantiate (/usr/local/lib/node_modules/generator-jhipster/node_modules/yeoman-environment/lib/environment.js:673:23)
at Environment.create (/usr/local/lib/node_modules/generator-jhipster/node_modules/yeoman-environment/lib/environment.js:645:19)
at /usr/local/lib/node_modules/generator-jhipster/cli/cli.js:74:31
at Array.forEach (<anonymous>)
at Object.<anonymous> (/usr/local/lib/node_modules/generator-jhipster/cli/cli.js:62:29)
at Module._compile (internal/modules/cjs/loader.js:1158:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1178:10)
at Module.load (internal/modules/cjs/loader.js:1002:32)
Emitted 'error' event on Environment instance at:
at Environment.error (/usr/local/lib/node_modules/generator-jhipster/node_modules/yeoman-environment/lib/environment.js:293:12)
at Environment.create (/usr/local/lib/node_modules/generator-jhipster/node_modules/yeoman-environment/lib/environment.js:647:19)
at /usr/local/lib/node_modules/generator-jhipster/cli/cli.js:74:31
[... lines matching original stack trace ...]
at Module.load (internal/modules/cjs/loader.js:1002:32)
at Function.Module._load (internal/modules/cjs/loader.js:901:14)
at Module.require (internal/modules/cjs/loader.js:1044:19)
I tried to update npm and JHipster, but there was another problem when upgrading JHipster:
~ sudo jhipster upgrade
INFO! Using JHipster version installed globally
INFO! Executing jhipster:upgrade
This seems to be an app blueprinted project with jhipster 6.6.0 bug (https://github.com/jhipster/generator-jhipster/issues/11045), you should pass --blueprints to jhipster upgrade commmand.
Error: This seems to be an app blueprinted project with jhipster 6.6.0 bug (https://github.com/jhipster/generator-jhipster/issues/11045), you should pass --blueprints to jhipster upgrade commmand.
at module.exports.error (/usr/local/lib/node_modules/generator-jhipster/generators/generator-base.js:1590:15)
at new module.exports (/usr/local/lib/node_modules/generator-jhipster/generators/upgrade/index.js:95:18)
at Environment.instantiate (/usr/local/lib/node_modules/generator-jhipster/node_modules/yeoman-environment/lib/environment.js:673:23)
at Environment.create (/usr/local/lib/node_modules/generator-jhipster/node_modules/yeoman-environment/lib/environment.js:645:19)
at instantiateAndRun (/usr/local/lib/node_modules/generator-jhipster/node_modules/yeoman-environment/lib/environment.js:729:30)
at Environment.run (/usr/local/lib/node_modules/generator-jhipster/node_modules/yeoman-environment/lib/environment.js:758:12)
at runYoCommand (/usr/local/lib/node_modules/generator-jhipster/cli/cli.js:53:13)
at Command.<anonymous> (/usr/local/lib/node_modules/generator-jhipster/cli/cli.js:178:17)
at Command.listener [as _actionHandler] (/usr/local/lib/node_modules/generator-jhipster/node_modules/commander/index.js:413:31)
at Command._parseCommand (/usr/local/lib/node_modules/generator-jhipster/node_modules/commander/index.js:914:14)
NPM: 6.14.8
Node: 12.16.1
jhipster: 6.10.3
Java: Tested with 13.0.2 & 11.0.8
Update:
The part of the code from which the error ('replace' of undefined) originated:
const configuration = {
    ...opts,
    ...this.getAllJhipsterConfig(this, true)
};
this.baseName = configuration.baseName;
this.serverPort = configuration.serverPort;
this.packageName = configuration.packageName;
// packageName is undefined when no JHipster config is found,
// so the next line throws "Cannot read property 'replace' of undefined".
this.rootPackageName = this.packageName.replace(/\.[^.]+$/, '');
Could you explain how I can fix this problem, please?
This question is about the Java library of Siddhi CEP.
Description:
I tried to establish an HTTP source to receive data. There was no error when creating the runtime and starting it:
[nioEventLoopGroup-2-1] INFO org.wso2.transport.http.netty.listener.ServerConnectorBootstrap$HTTPServerConnector - HTTP(S) Interface starting on host localhost and port 9056
[main] INFO org.wso2.extension.siddhi.io.http.source.HttpConnectorPortBindingListener - siddhi: started HTTP server connector localhost:9056
[main] INFO org.wso2.extension.siddhi.io.http.source.HttpSourceListener - Source Listener has created for url http://localhost:9056/endpoints/
However, when I send a POST request to the designated address, I get an error:
[nioEventLoopGroup-3-1] ERROR org.wso2.extension.siddhi.io.http.source.HTTPConnectorListener - Error in http server connector
java.lang.NoSuchMethodError: io.netty.handler.codec.http.HttpRequest.method()Lio/netty/handler/codec/http/HttpMethod;
at org.wso2.transport.http.netty.listener.CustomHttpContentCompressor.decode(CustomHttpContentCompressor.java:44)
at org.wso2.transport.http.netty.listener.CustomHttpContentCompressor.decode(CustomHttpContentCompressor.java:14)
at io.netty.handler.codec.MessageToMessageCodec$2.decode(MessageToMessageCodec.java:81)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:276)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:354)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:244)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:748)
Could anyone suggest what I might have done wrong? Thank you in advance.
Affected Product Version:
4.1.17
OS, DB, other environment details and versions:
IntelliJ IDEA 2017.3.5 (Community Edition)
Build #IC-173.4674.33, built on March 6, 2018
JRE: 1.8.0_152-release-1024-b15 amd64
JVM: OpenJDK 64-Bit Server VM by JetBrains s.r.o
Windows 10 10.0
Steps to reproduce:
The test code I wrote:
import org.wso2.siddhi.core.SiddhiAppRuntime;
import org.wso2.siddhi.core.SiddhiManager;
import org.wso2.siddhi.core.event.Event;
import org.wso2.siddhi.core.stream.output.StreamCallback;
import org.wso2.siddhi.core.util.EventPrinter;
//import org.wso2.extension.siddhi.io.http.source.*;

public class httpTest {
    public static void main(String[] args) {
        String siddhiString = "@App:name(\"haha\") " +
                "@App:description(\"fasd\") " +
                "@App:statistics(reporter = \"jmx\", interval = \"30\") " +
                "@source(type=\"http\",receiver.url=\"http://localhost:9056/endpoints/\",@map(type=\"text\",fail.on.missing.attribute=\"true\",regex.A=\"(.*)\",@attributes(data=\"A\"))) " +
                "@sink(type=\"mqtt\",url=\"tcp://120.78.71.179:1883\",topic=\"34\",@map(type=\"text\")) " +
                "define stream a4P068X5YCK(data String);";
        SiddhiManager siddhiManager = new SiddhiManager();
        SiddhiAppRuntime siddhiAppRuntime = siddhiManager.createSiddhiAppRuntime(siddhiString);
        siddhiAppRuntime.addCallback("a4P068X5YCK", new StreamCallback() {
            @Override
            public void receive(Event[] events) {
                EventPrinter.print(events);
            }
        });
        siddhiAppRuntime.start();
    }
}
Then I send a POST request to http://localhost:9056/endpoints/, and it returns the exception posted above.
Update:
I went back and checked the siddhi-io-http GitHub documentation page. I found that it says:
... This extension only works inside the WSO2 Data Analytic Server and cannot be run with standalone siddhi.
I guess this might suggest that HTTP is not supported by the standalone Siddhi library at the moment. I have submitted an issue on the Siddhi repository page to ask for confirmation.
Update 2:
I have changed my Siddhi query so that it copies the source stream into a separate sink stream. The rest of the code remains the same:
String siddhiString = "@App:name(\"haha\") " +
        "@App:description(\"fasd\") " +
        "@App:statistics(reporter = \"jmx\", interval = \"30\") " +
        "@source(type=\"http\",receiver.url=\"http://localhost:9056/endpoints/\",@map(type=\"text\",fail.on.missing.attribute=\"true\",regex.A=\"(.*)\",@attributes(data=\"A\"))) " +
        "define stream a4P068X5YCK(data String); " +
        "@sink(type=\"mqtt\",url=\"tcp://120.78.71.179:1883\",topic=\"34\",@map(type=\"text\")) " +
        "define stream pout(data String); " +
        "from a4P068X5YCK " +
        "select * " +
        "insert into pout; " +
        "";
The same problem still exists. I tried the WSO2 processor and it works fine. Now my guesses are:
1. a version mismatch
2. a lack of some packages that are in the WSO2 processor dependencies
I will try to investigate in those two directions and will post updates here and on the issue page as soon as I find something new.
Update 3:
As I keep adding updates the formatting seems to have some problems, but fortunately this issue has also come to an end. I included all the dependencies from the WSO2 processor source code and my test program started working. Therefore I assume there is a component in the WSO2 processor that the plain Siddhi library is lacking.
I then deleted the dependencies one by one to see if my test program still worked, and finally found that package. With this package my code works well:
<dependency>
    <groupId>org.wso2.msf4j</groupId>
    <artifactId>org.wso2.msf4j.feature</artifactId>
    <version>${msf4j.version}</version>
    <type>zip</type>
</dependency>
As I am a beginner at coding, I am not exactly sure what the problem was. I would be grateful if someone could explain the reason behind it. I appreciate all the help received in this process; it has also been a great learning experience for me.
Update 4: @Grainier I tried the sample code you posted and it actually worked, although I still have no idea why. I tried to copy your exact code to a new .java file in my project and it still won't work, so I guess it has something to do with the POM file.
Something I noticed is that when I ran your sample code, a few more warnings were printed in the console. SMALL UPDATE: I have found that the warnings appeared because I was using JDK 10; as soon as I switched back to 1.8 the warnings disappeared and the code still works, so maybe this is not the reason.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by io.netty.util.internal.ReflectionUtil (file:/C:/Users/ktz001/.m2/repository/io/netty/netty-common/4.1.16.Final/netty-common-4.1.16.Final.jar) to constructor java.nio.DirectByteBuffer(long,int)
WARNING: Please consider reporting this to the maintainers of io.netty.util.internal.ReflectionUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
The second difference is in the POM file: yours has one more repository added compared to mine.
<repository>
    <id>wso2-nexus</id>
    <name>WSO2 internal Repository</name>
    <url>http://maven.wso2.org/nexus/content/groups/wso2-public/</url>
    <releases>
        <enabled>true</enabled>
        <updatePolicy>daily</updatePolicy>
        <checksumPolicy>ignore</checksumPolicy>
    </releases>
</repository>
It would be great if you could suggest a reason.
Thank you for all of your work! It has been really helpful.
There seems to be an issue with the documentation... This should work with standalone Siddhi. All you have to do is add the following dependencies to your project (also MQTT, which I haven't included below):
<dependencies>
    <dependency>
        <groupId>org.wso2.siddhi</groupId>
        <artifactId>siddhi-core</artifactId>
        <version>${siddhi.version}</version>
    </dependency>
    <dependency>
        <groupId>org.wso2.siddhi</groupId>
        <artifactId>siddhi-annotations</artifactId>
        <version>${siddhi.version}</version>
    </dependency>
    <dependency>
        <groupId>org.wso2.siddhi</groupId>
        <artifactId>siddhi-query-compiler</artifactId>
        <version>${siddhi.version}</version>
    </dependency>
    <dependency>
        <groupId>org.wso2.extension.siddhi.io.http</groupId>
        <artifactId>siddhi-io-http</artifactId>
        <version>${siddhi.io.http.version}</version>
    </dependency>
    <dependency>
        <groupId>org.wso2.extension.siddhi.map.text</groupId>
        <artifactId>siddhi-map-text</artifactId>
        <version>${siddhi.mapper.text.version}</version>
    </dependency>
</dependencies>
However, there's an issue with your query: you have defined a @source and a @sink on a single stream, which is wrong. If you want to make it a passthrough, you have to define two streams (one for the source and one for the sink) and write a query that inserts events from the source stream into the sink stream, as sketched below.
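For illustration, a passthrough version might look like this in plain SiddhiQL (the stream names are placeholders; the source, sink, and mappings follow the question's):

@source(type="http", receiver.url="http://localhost:9056/endpoints/",
        @map(type="text", fail.on.missing.attribute="true", regex.A="(.*)", @attributes(data="A")))
define stream InputStream (data string);

@sink(type="mqtt", url="tcp://120.78.71.179:1883", topic="34", @map(type="text"))
define stream OutputStream (data string);

/* copy every event from the source stream into the sink stream */
from InputStream
select *
insert into OutputStream;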
UPDATE:
A sample can be found here; please try that and see whether it works.
I recently upgraded a cluster from 3.7.2 to 3.9.2, shutting down all the boxes.
I'm using Portable serialization, and the current version number in the config file is 6. Yet, after a cold start, the cluster reports an incompatible class definition.
I've restarted the cluster several times since, yet the same error remains.
How is it possible for the system to remain out of sync for some fields in some classes when the version for the class was upgraded?
Log:
Caused by:
com.hazelcast.nio.serialization.HazelcastSerializationException:
Incompatible class-definitions with same class-id:
ClassDefinition{factoryId=1, classId=8, version=6, fieldDefinitions=[
FieldDefinitionImpl{index=0, fieldName='feature', type=UTF, classId=0, factoryId=0, version=6},
FieldDefinitionImpl{index=1, fieldName='value', type=BOOLEAN, classId=0, factoryId=0, version=6}]}
VS
ClassDefinition{factoryId=1, classId=8, version=6, fieldDefinitions=[
FieldDefinitionImpl{index=0, fieldName='feature', type=UTF, classId=0, factoryId=0, version=0},
FieldDefinitionImpl{index=1, fieldName='value', type=BOOLEAN, classId=0, factoryId=0, version=0}]}
Config:
<serialization>
    <portable-version>6</portable-version>
    <portable-factories>
        <portable-factory factory-id="1">
            com.MyPortableFactory
        </portable-factory>
    </portable-factories>
</serialization>
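For reference, the class behind factoryId=1 / classId=8 in the log would look roughly like this. This is a sketch only; the class name is invented, and the field names and types are taken from the logged field definitions:

import com.hazelcast.nio.serialization.Portable;
import com.hazelcast.nio.serialization.PortableReader;
import com.hazelcast.nio.serialization.PortableWriter;
import java.io.IOException;

public class FeatureValue implements Portable {
    private String feature;  // logged as type=UTF
    private boolean value;   // logged as type=BOOLEAN

    @Override
    public int getFactoryId() { return 1; }

    @Override
    public int getClassId() { return 8; }

    @Override
    public void writePortable(PortableWriter writer) throws IOException {
        writer.writeUTF("feature", feature);
        writer.writeBoolean("value", value);
    }

    @Override
    public void readPortable(PortableReader reader) throws IOException {
        feature = reader.readUTF("feature");
        value = reader.readBoolean("value");
    }
}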
It is a bug in the Hazelcast library that was fixed in 3.9.4 and 3.10+.
Issue:
https://github.com/hazelcast/hazelcast/issues/12733
Fix for 3.10:
https://github.com/hazelcast/hazelcast/pull/12734
Fix for 3.9.4:
https://github.com/hazelcast/hazelcast/pull/12735
We have 10 Cassandra nodes in production running Cassandra 2.1.8. We recently upgraded to the 2.1.8 version; previously we were using only 3 nodes running Cassandra 2.1.2. First we upgraded the initial 3 nodes from 2.1.2 to 2.1.8 (following the procedure described in Upgrading Cassandra). Then we added 7 more nodes running Cassandra 2.1.8 to the cluster, and then we started our client programs. For the first few hours everything worked fine, but after a few hours we saw errors in the client program logs like:
Thread-0 [29/07/15 17:41:23.356] ERROR com.cleartrail.entityprofiling.engine.InterpretationWriter - Error:com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: [/172.50.33.161:9041, /172.50.33.162:9041, /172.50.33.95:9041, /172.50.33.96:9041, /172.50.33.165:9041, /172.50.33.166:9041, /172.50.33.163:9041, /172.50.33.164:9041, /172.50.33.42:9041, /172.50.33.167:9041] - use getErrors() for details)
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:65)
at com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:259)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:175)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:52)
at com.cleartrail.entityprofiling.engine.InterpretationWriter.WriteInterpretation(InterpretationWriter.java:430)
at com.cleartrail.entityprofiling.engine.Profiler.buildProfile(Profiler.java:1042)
at com.cleartrail.messageconsumer.consumer.KafkaConsumer.run(KafkaConsumer.java:336)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: [/172.50.33.161:9041, /172.50.33.162:9041, /172.50.33.95:9041, /172.50.33.96:9041, /172.50.33.165:9041, /172.50.33.166:9041, /172.50.33.163:9041, /172.50.33.164:9041, /172.50.33.42:9041, /172.50.33.167:9041] - use getErrors() for details)
at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:102)
at com.datastax.driver.core.RequestHandler$1.run(RequestHandler.java:176)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Now, I have double-checked the firewall (as suggested in a few posts), ports, and timeouts in the client as well as the nodes, and they are all correct.
I am also not closing the connection anywhere in between. I am using batch queries with a batch size of 1000, and the queries are update queries updating counters in my table with three columns:
entity, twfwv, cvalue
where the entity and twfwv columns are text and form the primary key, and cvalue is a counter column.
I even restarted all my nodes (because this trick helped me in my dev environment when I faced the same exception), but it is not helping. Please suggest what the probable problem could be here.
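For reference, the table described above would look roughly like this in CQL (the keyspace and table names are placeholders):

CREATE TABLE my_keyspace.entity_counters (
    entity text,
    twfwv text,
    cvalue counter,
    PRIMARY KEY (entity, twfwv)
);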
My issue was resolved by checking the errors collection of NoHostAvailableException, as advised by Olivier Michallat in the comments. For me it was the protocol version in the cluster configuration: mine was null, and setting it to 3 fixed the problem.
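With the DataStax Java driver, pinning the protocol version looks roughly like this (a sketch; the contact point is a placeholder):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ProtocolVersion;

// Fix the native protocol version explicitly instead of leaving it null (negotiated).
Cluster cluster = Cluster.builder()
        .addContactPoint("172.50.33.161")
        .withProtocolVersion(ProtocolVersion.V3)
        .build();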
My issue was resolved by adding a property to set or unset the custom TokenAwarePolicy load balancing policy my connection was using, and relying on the default instead.
Specifically, I was trying to get a local Spring Boot app talking to a single dockerized Cassandra instance:
Cluster.Builder builder = Cluster.builder()
        .addContactPoints(cassandraProperties.getHosts())
        .withPort(cassandraProperties.getPort())
        .withProtocolVersion(ProtocolVersion.V4)
        .withRetryPolicy(new LoggingRetryPolicy(DefaultRetryPolicy.INSTANCE))
        .withCredentials(cassandraProperties.getUsername(), cassandraProperties.getPassword())
        .withCodecRegistry(codecRegistry);

// Only attach the custom policy when explicitly enabled.
if (loadBalanced) {
    builder.withLoadBalancingPolicy(
            new TokenAwarePolicy(DCAwareRoundRobinPolicy.builder().withLocalDc(localDc).build()));
}
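For completeness, the builder would then presumably be finished off along these lines (sketch):

// Build the cluster and open a session; with loadBalanced left false,
// the driver falls back to its default load balancing policy.
Cluster cluster = builder.build();
Session session = cluster.connect();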