I am developing a REST API for a mobile app. The app is expected to have millions of users and to be used on a daily basis.
I am using AWS Lambda, API Gateway, and Amazon RDS (MySQL), with a CloudFormation template to configure everything.
I noticed that each function has a cold start time of 3 to 3.8 seconds. This needs to be reduced as much as possible.
HikariCPDataSource
import java.sql.Connection;
import java.sql.SQLException;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class HikariCPDataSource {

    private static HikariConfig config = new HikariConfig();
    private static HikariDataSource ds;

    static {
        config.setJdbcUrl("jdbc:mysql://gfgf.ffgfg.us-east-1.rds.amazonaws.com:3306/aaaa");
        config.setUsername("admin");
        config.setPassword("admin123");
        config.addDataSourceProperty("cachePrepStmts", "true");
        config.addDataSourceProperty("prepStmtCacheSize", "250");
        config.addDataSourceProperty("prepStmtCacheSqlLimit", "2048");
        ds = new HikariDataSource(config);
    }

    public static Connection getConnection() throws SQLException {
        return ds.getConnection();
    }

    private HikariCPDataSource() {}
}
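As an aside, the JDBC URL and credentials are hardcoded in the static initializer above. Since the SAM template below already wires environment variables into each function, a variant that reads them from the environment keeps secrets out of the jar. A minimal sketch, assuming illustrative variable names (DB_URL, DB_USER, DB_PASSWORD are not from the original):

import java.sql.Connection;
import java.sql.SQLException;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class EnvHikariCPDataSource {

    private static HikariConfig config = new HikariConfig();
    private static HikariDataSource ds;

    static {
        // DB_URL, DB_USER and DB_PASSWORD are illustrative names; they would be
        // set under Environment/Variables in template.yaml for each function.
        config.setJdbcUrl(System.getenv("DB_URL"));
        config.setUsername(System.getenv("DB_USER"));
        config.setPassword(System.getenv("DB_PASSWORD"));
        ds = new HikariDataSource(config);
    }

    public static Connection getConnection() throws SQLException {
        return ds.getConnection();
    }

    private EnvHikariCPDataSource() {}
}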
GetAllAccountTypesLambda
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyResponseEvent;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.peresiaapp.beans.AccountingType;

import java.util.ArrayList;
import java.util.List;
import java.sql.*;

public class GetAllAccountTypesLambda {

    ObjectMapper objectMapper = new ObjectMapper();
    static final String QUERY = "SELECT * from accounting_type";

    static Connection conn = null;

    static {
        try {
            conn = HikariCPDataSource.getConnection();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }

    public APIGatewayProxyResponseEvent getAllAccountTypes(APIGatewayProxyResponseEvent request)
            throws JsonProcessingException, ClassNotFoundException {
        List<AccountingType> list = new ArrayList<>();
        try (Statement stmt = conn.createStatement(); ResultSet rs = stmt.executeQuery(QUERY)) {
            // Extract data from the result set; create a fresh bean per row so the
            // list does not end up holding the same instance for every row
            while (rs.next()) {
                AccountingType acc = new AccountingType();
                // Retrieve by column name
                acc.setIdaccountingType(rs.getInt("idaccounting_Type"));
                acc.setType(rs.getString("type"));
                list.add(acc);
            }
        } catch (SQLException e) {
            e.printStackTrace();
        }
        String writeValueAsString = objectMapper.writeValueAsString(list);
        return new APIGatewayProxyResponseEvent().withStatusCode(200).withBody(writeValueAsString);
    }
}
template.yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  aaaa-restapi
  Sample SAM Template for aaaa-restapi

# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 100

Resources:
  GetAllAccountTypesLambda:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      CodeUri: aaaa-restapi
      Handler: com.peresiaapp.dao.accountingtype.GetAllAccountTypesLambda::getAllAccountTypes
      Runtime: java11
      MemorySize: 1024
      Environment: # More info about Env Vars: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#environment-object
        Variables:
          PARAM1: VALUE
      Events:
        HelloWorld:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            Path: /accounttype
            Method: get
      Role: !GetAtt LambdaRole.Arn
      VpcConfig:
        SecurityGroupIds:
          - sg-041f2459dcd921e8e
        SubnetIds:
          - subnet-0381dfdfd
          - subnet-c4ddf54cb

  GetAllRolesLambda:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: aaaa-restapi
      Handler: com.peresiaapp.dao.accountingtype.GetAllRolesLambda::getAllRoles
      Runtime: java11
      MemorySize: 1024
      Environment:
        Variables:
          PARAM1: VALUE
      Events:
        HelloWorld:
          Type: Api
          Properties:
            Path: /roles
            Method: get
      Role: !GetAtt LambdaRole.Arn
      VpcConfig:
        SecurityGroupIds:
          - sg-041f2459dcd921e8e
        SubnetIds:
          - subnet-0381sds2d
          - subnet-c4d5sdsb

  LambdaRole:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lambda.amazonaws.com
            Action:
              - 'sts:AssumeRole'
      Path: /
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
      Policies:
        - PolicyName: root
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - ec2:DescribeNetworkInterfaces
                  - ec2:CreateNetworkInterface
                  - ec2:DeleteNetworkInterface
                  - ec2:DescribeInstances
                  - ec2:AttachNetworkInterface
                Resource: '*'
UPDATE:
A couple of comments suggested provisioned concurrency. I did try it, but did not see much of a difference. However, if anyone can explain what the "900 available" figure shown below means, that would be great. I have hundreds of functions; does this mean I would have to spend a HUGE amount of money if I turned provisioned concurrency on? The figures on the pricing page seem to say otherwise, and that would be okay with me - https://aws.amazon.com/lambda/pricing/
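For what it's worth, the "900 available" figure is most likely your account's regional concurrency limit (1,000 by default) minus the 100 executions Lambda always keeps unreserved. That pool is shared across all functions, so it is a capacity ceiling, not a charge by itself; provisioned concurrency is billed per configured instance-hour on top of invocations. It is configured per published version or alias; in SAM that would look like the sketch below (the alias name and count are illustrative, not from the original template):

GetAllAccountTypesLambda:
  Type: AWS::Serverless::Function
  Properties:
    # ...existing properties as above...
    AutoPublishAlias: live                   # provisioned concurrency attaches to a version/alias
    ProvisionedConcurrencyConfig:
      ProvisionedConcurrentExecutions: 5     # illustrative count; billed per instance-hour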
Enable tiered compilation so the JIT stops at level 1 (see the snippet below).
Test various memory settings to find the most optimal. CPU scales with the memory allocated, and you are most likely constrained on CPU - see AWS Lambda Power Tuning.
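On the java11 runtime, tiered compilation can be capped via the JAVA_TOOL_OPTIONS environment variable, which the JVM picks up at startup. In the template above, that would be a sketch along these lines (Globals section only):

Globals:
  Function:
    Timeout: 100
    Environment:
      Variables:
        JAVA_TOOL_OPTIONS: -XX:+TieredCompilation -XX:TieredStopAtLevel=1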
Related
I have a very simple AWS Lambda Method as below
public class App
{
    public App(){}

    public void truckTracker(String lang, String lon)
    {
        System.out.println("Lat: " + lang + " Long: " + lon);
    }
}
Here is my template.yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  secondlambda
  Sample SAM Template for secondlambda

# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 20

Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      CodeUri: HelloWorldFunction
      Handler: helloworld.App::truckTracker
      Runtime: java11
      MemorySize: 512
Below is how I passed data into the method, using events.json and the AWS console.
{
  "lang": "102020203456151",
  "lon": "3000300303030"
}
Unfortunately I always end up with the following error.
{"errorMessage":"No public method named truckTracker with appropriate method signature found on class helloworld.App"}
How can I fix this?
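The Java runtime cannot map an event onto a two-String-parameter method: a handler method takes at most one input parameter (a POJO, a Map, or a stream), optionally followed by a Context. A sketch that matches the events.json shape while keeping the same helloworld.App::truckTracker handler string:

package helloworld;

import java.util.Map;

public class App
{
    public App(){}

    // Lambda deserializes the JSON event into the map; the keys
    // match the "lang" and "lon" fields in events.json.
    public void truckTracker(Map<String, String> event)
    {
        System.out.println("Lat: " + event.get("lang") + " Long: " + event.get("lon"));
    }
}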
I downloaded a fresh 6.1 broadleaf-commerce and ran it successfully on my MacBook via java -javaagent:./admin/target/agents/spring-instrument.jar -jar admin/target/admin.jar. But on my CentOS 7 machine, running sudo java -javaagent:./admin/target/agents/spring-instrument.jar -jar admin/target/admin.jar fails with the following error:
2020-10-12 13:20:10.838 INFO 2481 --- [ main] c.b.solr.autoconfigure.SolrServer : Syncing solr config file: jar:file:/home/mynewuser/seafood-broadleaf/admin/target/admin.jar!/BOOT-INF/lib/broadleaf-boot-starter-solr-2.2.1-GA.jar!/solr/standalone/solrhome/configsets/fulfillment_order/conf/solrconfig.xml to: /tmp/solr-7.7.2/solr-7.7.2/server/solr/configsets/fulfillment_order/conf/solrconfig.xml
*** [WARN] *** Your Max Processes Limit is currently 62383.
It should be set to 65000 to avoid operational disruption.
If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in your profile or solr.in.sh
WARNING: Starting Solr as the root user is a security risk and not considered best practice. Exiting.
Please consult the Reference Guide. To override this check, start with argument '-force'
2020-10-12 13:20:11.021 ERROR 2481 --- [ main] c.b.solr.autoconfigure.SolrServer : Problem starting Solr
Here is the source code of the Solr configuration; I believe it is the place to change the configuration programmatically so that Solr runs with the -force argument.
package com.community.core.config;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.broadleafcommerce.core.search.service.SearchService;
import org.broadleafcommerce.core.search.service.solr.SolrConfiguration;
import org.broadleafcommerce.core.search.service.solr.SolrSearchServiceImpl;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.stereotype.Component;

/**
 * @author Phillip Verheyden (phillipuniverse)
 */
@Component
public class ApplicationSolrConfiguration {

    @Value("${solr.url.primary}")
    protected String primaryCatalogSolrUrl;

    @Value("${solr.url.reindex}")
    protected String reindexCatalogSolrUrl;

    @Value("${solr.url.admin}")
    protected String adminCatalogSolrUrl;

    @Bean
    public SolrClient primaryCatalogSolrClient() {
        return new HttpSolrClient.Builder(primaryCatalogSolrUrl).build();
    }

    @Bean
    public SolrClient reindexCatalogSolrClient() {
        return new HttpSolrClient.Builder(reindexCatalogSolrUrl).build();
    }

    @Bean
    public SolrClient adminCatalogSolrClient() {
        return new HttpSolrClient.Builder(adminCatalogSolrUrl).build();
    }

    @Bean
    public SolrConfiguration blCatalogSolrConfiguration() throws IllegalStateException {
        return new SolrConfiguration(primaryCatalogSolrClient(), reindexCatalogSolrClient(), adminCatalogSolrClient());
    }

    @Bean
    protected SearchService blSearchService() {
        return new SolrSearchServiceImpl();
    }
}
Let me preface this by saying you would be better off simply not starting the application as root. If you are in Docker, you can use the USER command to switch to a non-root user.
The Solr server startup in Broadleaf Community is done programmatically via the broadleaf-boot-starter-solr dependency. This is the wrapper around Solr that ties it to the Spring lifecycle. All of the real magic happens in the com.broadleafcommerce.solr.autoconfigure.SolrServer class.
In that class, you will see a startSolr() method. This method is what adds startup arguments to Solr.
In your case, you will need to mostly copy this method wholesale and use cmdLine.addArgument(...) to add additional arguments. Example:
import java.io.IOException;

import org.apache.commons.exec.CommandLine;
import org.apache.commons.exec.DefaultExecutor;
import org.apache.commons.exec.Executor;
import org.apache.commons.exec.PumpStreamHandler;

// SolrProperties is assumed to live in the same package as SolrServer
import com.broadleafcommerce.solr.autoconfigure.SolrProperties;
import com.broadleafcommerce.solr.autoconfigure.SolrServer;

class ForceStartupSolrServer extends SolrServer {

    public ForceStartupSolrServer(SolrProperties props) {
        super(props);
    }

    @Override
    protected void startSolr() {
        if (!isRunning()) {
            if (!downloadSolrIfApplicable()) {
                throw new IllegalStateException("Could not download or expand Solr, see previous logs for more information");
            }
            stopSolr();
            synchConfig();
            {
                CommandLine cmdLine = new CommandLine(getSolrCommand());
                cmdLine.addArgument("start");
                cmdLine.addArgument("-p");
                cmdLine.addArgument(Integer.toString(props.getPort()));
                // START MODIFICATION
                cmdLine.addArgument("-force");
                // END MODIFICATION
                Executor executor = new DefaultExecutor();
                PumpStreamHandler streamHandler = new PumpStreamHandler(System.out);
                streamHandler.setStopTimeout(1000);
                executor.setStreamHandler(streamHandler);
                try {
                    executor.execute(cmdLine);
                    created = true;
                    checkCoreStatus();
                } catch (IOException e) {
                    LOG.error("Problem starting Solr", e);
                }
            }
        }
    }
}
Then create an @Configuration class to override the blAutoSolrServer bean created by SolrAutoConfiguration (note the specific package requirement for org.broadleafoverrides.config):
package org.broadleafoverrides.config;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import com.broadleafcommerce.solr.autoconfigure.SolrProperties;

@Configuration
public class OverrideConfiguration {

    @Bean
    public ForceStartupSolrServer blAutoSolrServer(SolrProperties props) {
        return new ForceStartupSolrServer(props);
    }
}
I'm trying to add a custom log appender to be used in Dropwizard. From my understanding, you add an appender:
import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.core.Appender;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.annotation.JsonTypeName;
import com.google.cloud.logging.logback.LoggingAppender;
import io.dropwizard.logging.AbstractAppenderFactory;
import io.dropwizard.logging.async.AsyncAppenderFactory;
import io.dropwizard.logging.filter.LevelFilterFactory;
import io.dropwizard.logging.layout.LayoutFactory;

@JsonTypeName("stack-driver-console")
public class StackDriverAppenderFactory extends AbstractAppenderFactory {

    private String appenderName = "CLOUD";
    private boolean includeContextName = true;

    @JsonProperty
    public String getName() {
        return this.appenderName;
    }

    @JsonProperty
    public void setName(String name) {
        this.appenderName = name;
    }

    @Override
    public Appender build(
            LoggerContext loggerContext,
            String s,
            LayoutFactory layoutFactory,
            LevelFilterFactory levelFilterFactory,
            AsyncAppenderFactory asyncAppenderFactory) {
        setNeverBlock(true);
        setTimeZone("utc");
        setLogFormat("%-6level [%d{HH:mm:ss.SSS}] [%t] %logger{5} - %X{code} %msg %n");

        LoggingAppender cloudAppender = new LoggingAppender();
        cloudAppender.addEnhancer("com.google.cloud.logging.TraceLoggingEnhancer");
        cloudAppender.addFilter(levelFilterFactory.build(Level.toLevel(getThreshold())));
        cloudAppender.setFlushLevel(Level.WARN);
        cloudAppender.setLog("application.log");
        cloudAppender.setName(appenderName);
        cloudAppender.setContext(loggerContext);
        cloudAppender.start();
        return wrapAsync(cloudAppender, asyncAppenderFactory);
    }
}
then, in /main/resources/META-INF/services/io.dropwizard.logging.AppenderFactory:
io.dropwizard.logging.ConsoleAppenderFactory
io.dropwizard.logging.FileAppenderFactory
io.dropwizard.logging.SyslogAppenderFactory
com.example.logger.StackDriverAppenderFactory
Dropwizard version: compile 'io.dropwizard:dropwizard-core:1.3.16'
config.yaml
# Logging settings.
logging:
  level: DEBUG
  appenders:
    - type: stack-driver-console
      threshold: INFO

# use the simple server factory if you only want to run on a single port
server:
  applicationConnectors:
    - type: http
      port: 8080
  adminConnectors:
    - type: http
      port: 8081

# the only required property is resourcePackage, for more config options see below
swagger:
  resourcePackage: com.example.resources
When I put a breakpoint in io.dropwizard.configuration.BasicConfigurationFactory:build() line 127,
final T config = mapper.readValue(new TreeTraversingParser(node), klass);
where the node is defined as:
{
  "logging": {
    "level": "DEBUG",
    "appenders": [
      {
        "type": "stack-driver-console",
        "threshold": "INFO"
      }
    ]
  },
  "server": {
    "applicationConnectors": [
      {
        "type": "http",
        "port": 8080
      }
    ],
    "adminConnectors": [
      {
        "type": "http",
        "port": 8081
      }
    ]
  },
  "swagger": {
    "resourcePackage": "com.example.resources"
  }
}
throws an exception
com.fasterxml.jackson.databind.exc.InvalidTypeIdException: Could not resolve type id 'stack-driver-console' as a subtype of [simple type, class io.dropwizard.logging.AppenderFactory<ch.qos.logback.classic.spi.ILoggingEvent>]: known type ids = [console, file, syslog, tcp, udp] (for POJO property 'appenders')
at [Source: UNKNOWN; line: -1, column: -1] (through reference chain: com.example.App["logging"]->io.dropwizard.logging.DefaultLoggingFactory["appenders"]->java.util.ArrayList[0])
It's not clear to me why Dropwizard is having a hard time identifying the custom type. This used to work; has something changed?
I followed this example as well: https://gist.github.com/ajmath/e9f90c29cd224653c218
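One hedged observation: the exception lists only the built-in type ids (console, file, syslog, tcp, udp), which suggests the custom factory was never discovered from META-INF/services at runtime, rather than mis-mapped by Jackson. A quick diagnostic (a sketch to verify that guess, not a confirmed fix) is to print what the service-loader mechanism actually sees on the runtime classpath:

import java.util.ServiceLoader;

import io.dropwizard.logging.AppenderFactory;

public class AppenderFactoryCheck {
    public static void main(String[] args) {
        // StackDriverAppenderFactory should be listed here if the services file
        // is packaged correctly; if it is missing, look at the classpath or
        // jar-shading configuration rather than the factory code itself.
        ServiceLoader.load(AppenderFactory.class)
                .forEach(f -> System.out.println(f.getClass().getName()));
    }
}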
I have a simple Java program trying to connect to the local instance of Neo4j to create a sample database to play with, but I keep getting:
Exception: Error starting org.neo4j.kernel.impl.factory.CommunityFacadeFactory, /home/mleonard/mbig/neodb/testeroo
Trace:[Ljava.lang.StackTraceElement;@7ce7d377
The stack trace is useless and the error is also not overly helpful.
The server is up and running
Starting Neo4j Server console-mode...
2015-08-11 15:28:06.466-0400 INFO Setting startup timeout to 120000ms
2015-08-11 15:28:23.311-0400 INFO Successfully started database
2015-08-11 15:28:25.920-0400 INFO Starting HTTP on port 7474 (24 threads available)
2015-08-11 15:28:26.972-0400 INFO Enabling HTTPS on port 7473
2015-08-11 15:28:28.247-0400 INFO Mounting static content at /webadmin
2015-08-11 15:28:28.704-0400 INFO Mounting static content at /browser
2015-08-11 15:28:30.321-0400 ERROR The class org.neo4j.server.rest.web.CollectUserAgentFilter is not assignable to the class com.sun.jersey.spi.container.ContainerRequestFilter. This class is ignored.
2015-08-11 15:28:31.677-0400 ERROR The class org.neo4j.server.rest.web.CollectUserAgentFilter is not assignable to the class com.sun.jersey.spi.container.ContainerRequestFilter. This class is ignored.
2015-08-11 15:28:32.165-0400 ERROR The class org.neo4j.server.rest.web.CollectUserAgentFilter is not assignable to the class com.sun.jersey.spi.container.ContainerRequestFilter. This class is ignored.
2015-08-11 15:28:33.031-0400 INFO Remote interface ready and available at http://localhost:7474/
And I can see that the database is created as the files and folders for the new database are created.
import java.io.File;

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.RelationshipType;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;

public class Main {

    private static enum RelTypes implements RelationshipType
    {
        WORKS,
        FRIENDS,
        NEMISIS
    }

    public static void main(String[] args)
    {
        System.out.println("Starting ARM4J");

        GraphDatabaseService db = null;
        try
        {
            db = new GraphDatabaseFactory().newEmbeddedDatabaseBuilder(new File("/home/mleonard/mbig/neodb/testeroo"))
                    .loadPropertiesFromFile("/home/mleonard/mbig/neo4j-community-2.3.0-M02/conf/neo4j.properties")
                    .newGraphDatabase();
            /*
            try( Transaction tx = db.beginTx() )
            {
                Node matty = db.createNode();
                matty.setProperty("name", "Matthew");
                matty.setProperty("age", "31");
                matty.setProperty("sex", "male");

                Node jay = db.createNode();
                jay.setProperty("name", "Jay");
                jay.setProperty("age", "35");
                jay.setProperty("sex", "male");

                org.neo4j.graphdb.Relationship r = matty.createRelationshipTo(jay, RelTypes.WORKS);
                r.setProperty("years", "2");

                tx.success();
            }
            */
        }
        catch(Exception x)
        {
            System.out.println("Exception: " + x.getMessage());
            System.out.println("Trace:" + x.getStackTrace().toString());
        }
        finally
        {
            if( db != null )
            {
                db.shutdown();
            }
        }
        System.out.println("Stopping ARM4J");
    }
}
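As an aside, the unhelpful Trace:[Ljava.lang.StackTraceElement;@7ce7d377 output is just the default toString() of the array returned by getStackTrace(), which prints the type and identity hash rather than the frames. Letting the exception print itself gives the real trace; the catch block could read:

catch(Exception x)
{
    System.out.println("Exception: " + x.getMessage());
    x.printStackTrace(); // prints the full stack trace instead of the array's identity hash
}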
I am using the Matrikon OPC Server for Simulation and Testing, instead of TOPServer, along with the tutorial HowToStartWithUtgard. I am not able to connect to the server. This is the error that I get:
15:02:18.452 [main] DEBUG o.j.dcom.transport.JIComTransport - Socket closed... Socket[unconnected] host XXX.XXX.XX.X, port 135
15:02:18.453 [main] WARN org.jinterop.dcom.core.JIComServer - Got the class not registered exception , will attempt setting entries based on status flags...
15:02:18.468 [main] INFO org.openscada.opc.lib.da.Server - Failed to connect to server
org.jinterop.dcom.common.JIException: Class not registered. If you are using a DLL/OCX , please make sure it has "DllSurrogate" flag set. Faq A(6) in readme.html. [0x80040154]
at org.jinterop.dcom.core.JIComServer.init(Unknown Source) ~[org.openscada.jinterop.core_2.0.8.201303051454.jar:na]
at org.jinterop.dcom.core.JIComServer.initialise(Unknown Source) ~[org.openscada.jinterop.core_2.0.8.201303051454.jar:na]
at org.jinterop.dcom.core.JIComServer.<init>(Unknown Source) ~[org.openscada.jinterop.core_2.0.8.201303051454.jar:na]
at org.openscada.opc.lib.da.Server.connect(Server.java:117) ~[org.openscada.opc.lib_1.0.0.201303051455.jar:na]
at com.matrikonopc.utgard.tutorial.UtgardReadTutorial.main(UtgardReadTutorial.java:31) [bin/:na]
Caused by: org.jinterop.dcom.common.JIRuntimeException: Class not registered. If you are using a DLL/OCX , please make sure it has "DllSurrogate" flag set. Faq A(6) in readme.html. [0x80040154]
at org.jinterop.dcom.core.JIRemActivation.read(Unknown Source) ~[org.openscada.jinterop.core_2.0.8.201303051454.jar:na]
at ndr.NdrObject.decode(Unknown Source) ~[org.openscada.jinterop.deps_1.0.0.201303051454.jar:na]
at rpc.ConnectionOrientedEndpoint.call(Unknown Source) ~[org.openscada.jinterop.deps_1.0.0.201303051454.jar:na]
at rpc.Stub.call(Unknown Source) ~[org.openscada.jinterop.deps_1.0.0.201303051454.jar:na]
... 5 common frames omitted
15:02:18.469 [main] INFO org.openscada.opc.lib.da.Server - Destroying DCOM session...
15:02:18.470 [main] INFO org.openscada.opc.lib.da.Server - Destroying DCOM session... forked
80040154: Unknown error (80040154)
15:02:18.499 [OPCSessionDestructor] DEBUG org.openscada.opc.lib.da.Server - Starting destruction of DCOM session
15:02:18.500 [OPCSessionDestructor] INFO org.jinterop.dcom.core.JISession - About to destroy 0 sessesion which are linked to this session: 1325311425
15:02:18.500 [OPCSessionDestructor] INFO o.j.dcom.core.JIComOxidRuntime - destroySessionOIDs for session: 1325311425
15:02:18.500 [OPCSessionDestructor] INFO org.openscada.opc.lib.da.Server - Destructed DCOM session
15:02:18.501 [OPCSessionDestructor] INFO org.openscada.opc.lib.da.Server - Session destruction took 27 ms
I do not know where I should register the class, or which class it refers to.
It is referring to the clsid you're attempting to use -- it is not in the registry. Can you double check that you're using the correct one for Matrikon OPC Simulation Server?
Working demo, tested on Windows 10 and Java 8.
User must have administrator rights on Windows.
Errors that might occur:
00000005: Login error (does the user have administrator rights?)
8001FFFF: Firewall, RPC dynamic ports are not open (see below)
80040154: Double check CLSID in registry, below HKEY_CLASSES_ROOT
Firewall rules
netsh advfirewall firewall add rule^
name="DCOM-dynamic"^
dir=in^
action=allow^
protocol=TCP^
localport=RPC^
remoteport=49152-65535
rem the next one does not seem to be needed
netsh advfirewall firewall add rule name="DCOM" dir=in action=allow protocol=TCP localport=135
Java code
package demo.opc;
import java.util.concurrent.Executors;
import java.util.logging.Level;
import java.util.logging.Logger;
import org.openscada.opc.lib.common.ConnectionInformation;
import org.openscada.opc.lib.da.AccessBase;
import org.openscada.opc.lib.da.DataCallback;
import org.openscada.opc.lib.da.Item;
import org.openscada.opc.lib.da.ItemState;
import org.openscada.opc.lib.da.Server;
import org.openscada.opc.lib.da.SyncAccess;
public class UtgardReaderDemo {
/**
* Main application, arguments are provided as system properties, e.g.<br>
* java -Dhost="localhost" -Duser="admin" -Dpassword="secret" -jar demo.opc.jar<br>
* Tested with a windows user having administrator rights<br>
* #param args unused
* #throws Exception in case of unexpected error
*/
public static void main(String[] args) throws Exception {
Logger.getLogger("org.jinterop").setLevel(Level.ALL); // Quiet => Level.OFF
final String host = System.getProperty("host", "localhost");
final String user = System.getProperty("user", System.getProperty("user.name"));
final String password = System.getProperty("password");
// Powershell: Get-ItemPropertyValue 'Registry::HKCR\Matrikon.OPC.Simulation.1\CLSID' '(default)'
final String clsId = System.getProperty("clsId", "F8582CF2-88FB-11D0-B850-00C0F0104305");
final String itemId = System.getProperty("itemId", "Saw-toothed Waves.Int2");
final ConnectionInformation ci = new ConnectionInformation(user, password);
ci.setHost(host);
ci.setClsid(clsId);
final Server server = new Server(ci, Executors.newSingleThreadScheduledExecutor());
server.connect();
final AccessBase access = new SyncAccess(server, 1000);
access.addItem(itemId, new DataCallback() {
public void changed(final Item item, final ItemState state) {
System.out.println(state);
}
});
access.bind();
Thread.sleep(10_000L);
access.unbind();
}
}
build.gradle
plugins {
    id 'java-library'
    id 'eclipse'
}

repositories {
    mavenCentral()
}

dependencies {
    implementation 'org.bouncycastle:bcprov-jdk15on:1.60'
    implementation 'org.openscada.utgard:org.openscada.opc.lib:1.5.0'
}

jar {
    manifest {
        attributes(
            'Class-Path': configurations.runtimeClasspath.collect { 'lib/' + it.getName() }.join(' '),
            'Main-Class': 'demo.opc.UtgardReaderDemo'
        )
    }
}

assemble {
    dependsOn 'dependenciesCopy'
}

task dependenciesCopy(type: Copy) {
    group 'dependencies'
    from sourceSets.main.compileClasspath
    into "$libsDir/lib"
}