There is an unknown field "container" in Tekton - Java

I've been looking into Tekton recently, but I'm running into an issue when I implement a Task with the fabric8 tekton Java API.
The TaskBuilder class has an API for adding steps to the spec as whole containers (withContainer).
However, at runtime I get an "unknown field container" error.
Can I get some advice?
Tekton version: v0.10.1
I am using the following packages:
io.fabric8:kubernetes-client:4.7.1
io.fabric8:tekton-client:4.7.1
Here is my complete test code.
package com.example.tekton;
import java.util.ArrayList;
import java.util.List;
import io.fabric8.kubernetes.api.model.Container;
import io.fabric8.kubernetes.api.model.ContainerBuilder;
import io.fabric8.kubernetes.client.BaseClient;
import io.fabric8.kubernetes.client.Config;
import io.fabric8.kubernetes.client.ConfigBuilder;
import io.fabric8.tekton.client.TektonClient;
import io.fabric8.tekton.client.DefaultTektonClient;
import io.fabric8.tekton.client.handlers.TaskHandler;
import io.fabric8.tekton.client.handlers.TaskRunHandler;
import io.fabric8.tekton.pipeline.v1alpha1.ArrayOrString;
import io.fabric8.tekton.pipeline.v1alpha1.Task;
import io.fabric8.tekton.pipeline.v1alpha1.TaskBuilder;
import io.fabric8.tekton.pipeline.v1alpha1.TaskRun;
import io.fabric8.tekton.pipeline.v1alpha1.TaskRunBuilder;
import io.fabric8.tekton.pipeline.v1alpha1.TaskRefBuilder;
public class DefaultKubernetesTest {
public Task getTask() {
Container con = new ContainerBuilder()
.withNewImage("ubuntu")
.withNewName("echo-hello-world")
.addNewCommand("echo")
.addNewArg("hello jinwon world")
.build();
Task task = new TaskBuilder()
.withApiVersion("tekton.dev/v1alpha1")
.withKind("Task")
.withNewMetadata()
.withName("echo-hello-world-test")
.endMetadata()
.withNewSpec()
.addNewStep()
.withContainer(con)
.endStep()
.endSpec()
.build();
return task;
}
public TaskRun getTaskRun() {
TaskRun taskRun = new TaskRunBuilder()
.withNewMetadata()
.withName("taskrun")
.endMetadata()
.withNewSpec()
.withTaskRef(new TaskRefBuilder().withName("echo-hello-world-test").withApiVersion("tekton.dev/v1alpha1").withKind("Task").build())
.endSpec().build();
return taskRun;
}
public static void main(String[] args) {
ConfigBuilder config = new ConfigBuilder();
DefaultKubernetesTest kubeTest = new DefaultKubernetesTest();
String username = "testUser";
String password = "testPwd";
config = config.withMasterUrl("https://192.168.6.236:6443");
config = config.withUsername(username);
config = config.withPassword(password);
Config kubeConfig = config.build();
try (DefaultTektonClient test = new DefaultTektonClient(kubeConfig)) {
Task task = kubeTest.getTask();
TaskRun taskRun = kubeTest.getTaskRun();
test.tasks().inNamespace("test").create(task);
test.taskRuns().inNamespace("test").create(taskRun);
test.close();
}
}
}

Tekton ships with an admission controller, which validates the CRD specs before allowing them into the cluster. Because the project is still in alpha, it's moving quite fast, and Fabric8 may be templating out K8s objects against a different spec from what is installed on your cluster. You should be able to check which spec version Fabric8 is using, remove all the Tekton objects from your cluster, and re-apply them at a specific version.
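One thing worth checking alongside that: in Tekton's YAML a step carries the container fields (name, image, command, args) inline rather than under a nested container key, so a Task serialized with a container object inside a step is exactly the kind of thing the validating webhook reports as an unknown field. A minimal sketch, assuming the generated Step builder in your tekton-client version exposes those fields directly (if it only offers withContainer, aligning the client version with the Tekton release installed on the cluster is the safer fix):

Task task = new TaskBuilder()
    .withApiVersion("tekton.dev/v1alpha1")
    .withKind("Task")
    .withNewMetadata()
        .withName("echo-hello-world-test")
    .endMetadata()
    .withNewSpec()
        .addNewStep()
            // step fields set inline instead of wrapping a Container object
            .withName("echo-hello-world")
            .withImage("ubuntu")
            .withCommand("echo")
            .withArgs("hello jinwon world")
        .endStep()
    .endSpec()
    .build();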

Related

How to Rerun a Particular Cucumber Scenario when it Fails with Cucumber

I know how to use two runner classes to rerun failed scenarios, but I want this feature for only one test.
Let's say I have 100 scenarios and I only want to rerun scenario 40 when it fails; if any other scenario fails, I don't want it to rerun. Is there a way to implement this for one test in particular?
To see how to rerun all failed scenarios, check out this question:
How to rerun the failed scenarios using Cucumber?
You'll have to write custom code for this. Fortunately this is relatively easy with the JUnit Platform API (JUnit 5).
https://github.com/cucumber/cucumber-jvm/tree/main/cucumber-junit-platform-engine#rerunning-failed-scenarios
package com.example;
import org.junit.platform.engine.discovery.DiscoverySelectors;
import org.junit.platform.engine.discovery.UniqueIdSelector;
import org.junit.platform.launcher.Launcher;
import org.junit.platform.launcher.LauncherDiscoveryRequest;
import org.junit.platform.launcher.TestIdentifier;
import org.junit.platform.launcher.core.LauncherFactory;
import org.junit.platform.launcher.listeners.SummaryGeneratingListener;
import org.junit.platform.launcher.listeners.TestExecutionSummary;
import org.junit.platform.launcher.listeners.TestExecutionSummary.Failure;
import java.util.List;
import java.util.stream.Collectors;
import static org.junit.platform.engine.discovery.DiscoverySelectors.selectDirectory;
import static org.junit.platform.launcher.core.LauncherDiscoveryRequestBuilder.request;
public class RunCucumber {
public static void main(String[] args) {
LauncherDiscoveryRequest request = request()
.selectors(
selectDirectory("path/to/features")
)
.build();
Launcher launcher = LauncherFactory.create();
SummaryGeneratingListener listener = new SummaryGeneratingListener();
launcher.registerTestExecutionListeners(listener);
launcher.execute(request);
TestExecutionSummary summary = listener.getSummary();
// Do something with summary
List<UniqueIdSelector> failures = summary.getFailures().stream()
.map(Failure::getTestIdentifier)
.filter(TestIdentifier::isTest)
// Filter more to select scenarios to rerun
.map(TestIdentifier::getUniqueId)
.map(DiscoverySelectors::selectUniqueId)
.collect(Collectors.toList());
LauncherDiscoveryRequest rerunRequest = request()
.selectors(failures)
.build();
launcher.execute(rerunRequest);
TestExecutionSummary rerunSummary = listener.getSummary();
// Do something with rerunSummary
}
}
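To limit the rerun to a single scenario (scenario 40 in the question), the "Filter more" step above is where you narrow the selection, for example by matching the scenario's display name. A small sketch of just that stream; the scenario name here is a placeholder for whatever your feature file calls it:

List<UniqueIdSelector> failures = summary.getFailures().stream()
    .map(Failure::getTestIdentifier)
    .filter(TestIdentifier::isTest)
    // keep only the one scenario that should be rerun on failure
    .filter(id -> id.getDisplayName().contains("Scenario 40"))
    .map(TestIdentifier::getUniqueId)
    .map(DiscoverySelectors::selectUniqueId)
    .collect(Collectors.toList());

Every other failed scenario is filtered out, so only that one ends up in the rerun request.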

Metrics not appearing at the Prometheus endpoint when using Micrometer's PrometheusMeterRegistry

I'm new to Micrometer and Prometheus and I'm trying to build my first hello-world application monitored with Micrometer, using Prometheus as the monitoring backend. But I can't see the metrics from my app (Counters and Timers) at the Prometheus endpoint.
I'm following this tutorial for Prometheus. I also followed this video for getting started with Micrometer.
I downloaded Prometheus from this link, extracted it, and ran it with: ./prometheus --config.file=prometheus.yml. The target in that config file is set to targets: ['localhost:9090'].
Then I ran my Main class which looks like this:
import cern.jet.random.Normal;
import cern.jet.random.engine.MersenneTwister64;
import cern.jet.random.engine.RandomEngine;
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.Timer;
import io.micrometer.core.instrument.composite.CompositeMeterRegistry;
import io.micrometer.core.instrument.logging.LoggingMeterRegistry;
import io.micrometer.jmx.JmxMeterRegistry;
import io.micrometer.prometheus.PrometheusMeterRegistry;
import reactor.core.publisher.Flux;
import java.time.Duration;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
public class Main {
public static void main(String[] args) throws InterruptedException {
CompositeMeterRegistry compositeMeterRegistry = new CompositeMeterRegistry();
LoggingMeterRegistry loggingMeterRegistry = SampleMeterRegistries.loggingMeterRegistry();
JmxMeterRegistry jmxMeterRegistry = SampleMeterRegistries.jmxMeterRegistry();
// AtlasMeterRegistry atlasMeterRegistry = SampleMeterRegistries.atlasMeterRegistry();
PrometheusMeterRegistry prometheusMeterRegistry = SampleMeterRegistries.prometheus();
compositeMeterRegistry.add(loggingMeterRegistry);
compositeMeterRegistry.add(jmxMeterRegistry);
// compositeMeterRegistry.add(atlasMeterRegistry);
compositeMeterRegistry.add(prometheusMeterRegistry);
AtomicInteger latencyForThisSecond = new AtomicInteger(0);
Gauge gauge = Gauge.builder("my.guage", latencyForThisSecond, n -> n.get())
.register(compositeMeterRegistry);
Counter counter = Counter
.builder("my.counter")
.description("some description")
.tags("dev", "performance")
.register(compositeMeterRegistry);
Timer timer = Timer.builder("timer")
.publishPercentileHistogram()
.sla(Duration.ofMillis(270))
.register(compositeMeterRegistry);
// colt/colt/1.2.0 is to be added for this.
RandomEngine randomEngine = new MersenneTwister64(0);
Normal incomingRequests = new Normal(0, 1, randomEngine);
Normal duration = new Normal(250, 50, randomEngine);
latencyForThisSecond.set(duration.nextInt());
// For Flux you require io.projectreactor/reactor-core/3.2.3.RELEASE
Flux.interval(Duration.ofSeconds(1))
.doOnEach(d -> {
if (incomingRequests.nextDouble() + 0.4 > 0) {
timer.record(latencyForThisSecond.get(), TimeUnit.MILLISECONDS);
}
}).blockLast();
}
}
When I run ./prometheus --config.file=prometheus.yml, I can access http://localhost:9090/metrics and http://localhost:9090/graph. But when I execute the query sum(timer_duration_seconds_sum) / sum(timer_duration_seconds_count) at http://localhost:9090/graph, it says no datapoints found.
It seems to me that I'm missing something obvious (as I'm a beginner in both of these topics).
Can someone please point out what I'm missing?
I couldn't find where in my Main class I'm supposed to configure the URI that Prometheus should scrape. Even if it defaults to http://localhost:9090 somewhere inside Micrometer, I couldn't find it.
targets: ['localhost:9090']
That's Prometheus being asked to scrape itself.
You need to add a target for the Java application's HTTP endpoint.
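The PrometheusMeterRegistry only collects and formats the metrics; it does not open an HTTP port by itself, so there is nothing for Prometheus to scrape from the application yet. A minimal sketch of exposing the registry over the JDK's built-in HttpServer, roughly along the lines of the example in the Micrometer documentation (port 8080 and the /metrics path are arbitrary choices, and the registry is created directly here; in your Main it would be the one returned by SampleMeterRegistries.prometheus()):

import com.sun.net.httpserver.HttpServer;
import io.micrometer.prometheus.PrometheusConfig;
import io.micrometer.prometheus.PrometheusMeterRegistry;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class PrometheusEndpoint {
    public static void main(String[] args) throws Exception {
        PrometheusMeterRegistry prometheusMeterRegistry = new PrometheusMeterRegistry(PrometheusConfig.DEFAULT);
        // serve the registry's scrape output at http://localhost:8080/metrics
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/metrics", exchange -> {
            String body = prometheusMeterRegistry.scrape();
            exchange.sendResponseHeaders(200, body.getBytes().length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body.getBytes());
            }
        });
        server.start();
    }
}

Then point Prometheus at the application instead of (or in addition to) itself, e.g. a scrape job in prometheus.yml with targets: ['localhost:8080'], and the timer and counter series should start showing up under /graph.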

EMR Cluster creation using Java

I am trying to create an EMR cluster using Java. I have created the jar file and put it into a Lambda function, which I'm calling from AWS Step Functions. I created the Maven package including the AWS Java SDK dependencies and imported all the packages:
import java.io.IOException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.auth.PropertiesCredentials;
import com.amazonaws.services.elasticmapreduce.*;
import com.amazonaws.services.elasticmapreduce.model.*;
import com.amazonaws.services.elasticmapreduce.util.StepFactory;
public class CreateCluster {
public static void main(String[] args) {
AWSCredentials credentials = new BasicAWSCredentials("access key", "secret key");
// myApp={[Hadoop]};
AmazonElasticMapReduceClient emr = new AmazonElasticMapReduceClient(credentials);
String COMMAND_RUNNER = "command-runner.jar";
String DEBUGGING_COMMAND = "state-pusher-script";
String DEBUGGING_NAME = "Setup Hadoop Debugging";
StepFactory stepFactory = new StepFactory();
StepConfig enabledebugging = new StepConfig()
.withName(DEBUGGING_NAME)
.withActionOnFailure(ActionOnFailure.TERMINATE_CLUSTER)
.withHadoopJarStep(new HadoopJarStepConfig()
.withJar(COMMAND_RUNNER)
.withArgs(DEBUGGING_COMMAND));
RunJobFlowRequest request = new RunJobFlowRequest()
.withName("REMR")
.withReleaseLabel("emr-5.16.0")
.withSteps(enabledebugging)
// .withApplications(myApp)
.withLogUri("s3n://r.base.ihm/emr-log/")
.withServiceRole("service_role")
.withJobFlowRole("jobflow_role")
.withInstances(new JobFlowInstancesConfig()
.withEc2KeyName("emr")
.withEc2SubnetId("subnet-d1fbb8ee")
.withInstanceCount(3)
.withKeepJobFlowAliveWhenNoSteps(false)
.withMasterInstanceType("m4.large")
.withSlaveInstanceType("m4.large"));
RunJobFlowResult result = emr.runJobFlow(request);
}
}
but I'm still getting this error:
java.lang.NoClassDefFoundError
{
"errorMessage": "Error loading class com.ihm.base.spark.CreateCluster: com/amazonaws/auth/AWSCredentials",
"errorType": "java.lang.NoClassDefFoundError"
}
Any ideas on what I am missing here?

IBM Case Manager Connection

I've been stuck on this at work for about a week now.
Can someone with Case Manager Java API experience tell me what I'm doing wrong?
I'm trying to connect to the server where Case Manager is installed and start a session. Maybe I'm doing it all wrong, but the IBM Knowledge Center did not help.
GOT IT! Working code below:
package *packacge*;
// jars from the CM installation folders on the server
import java.util.List;
import java.util.Locale;
import javax.security.auth.Subject;
import com.filenet.api.core.Connection;
import com.filenet.api.core.ObjectStore;
import com.filenet.api.util.UserContext;
import com.ibm.casemgmt.api.CaseType;
import com.ibm.casemgmt.api.DeployedSolution;
import com.ibm.casemgmt.api.context.CaseMgmtContext;
import com.ibm.casemgmt.api.context.P8ConnectionCache;
import com.ibm.casemgmt.api.context.SimpleP8ConnectionCache;
import com.ibm.casemgmt.api.context.SimpleVWSessionCache;
import com.ibm.casemgmt.api.objectref.ObjectStoreReference;
public class CaseMgmtSession {
public static void main(String[] args) {
P8ConnectionCache connCache = new SimpleP8ConnectionCache();
Connection conn = connCache.getP8Connection("http://*ip of the server CM is installed on*/wsi/FNCEWS40MTOM/");
Subject subject = UserContext.createSubject(conn, *user of CM builder admin*, *pass of CM builder admin*, "FileNetP8WSI");
UserContext uc = UserContext.get();
uc.pushSubject(subject);
Locale origLocale = uc.getLocale();
uc.setLocale(Locale.ENGLISH);
CaseMgmtContext origCmctx = CaseMgmtContext.set(new CaseMgmtContext(new SimpleVWSessionCache(), connCache));
try {
// Code that calls the Case Java API or
// directly calls the CE Java API
// checking the connection is working
ObjectStore os = P8Connector.getObjectStore(*some object store name*);
ObjectStoreReference osRef = new ObjectStoreReference(os);
DeployedSolution someSolution = DeployedSolution.fetchInstance(osRef, *some deployed solution name*);
System.out.println(someSolution.getSolutionName());
List<CaseType> caseTypes = someSolution.getCaseTypes();
for(CaseType ct : caseTypes) {
System.out.println(ct.getName());
}
}
finally {
CaseMgmtContext.set(origCmctx);
uc.setLocale(origLocale);
uc.popSubject();
}
}
}
where P8Connector is a class I wrote that returns an ObjectStore.
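For reference, such a helper can be built from the standard CE Factory calls. A minimal sketch (here the connection and object store name are passed in explicitly, while the class used above presumably holds the connection internally; it must run with the Subject already pushed on the UserContext, as in the code above):

import com.filenet.api.core.Connection;
import com.filenet.api.core.Domain;
import com.filenet.api.core.Factory;
import com.filenet.api.core.ObjectStore;

public class P8Connector {
    // fetch an object store from the connection's default domain
    public static ObjectStore getObjectStore(Connection conn, String objectStoreName) {
        Domain domain = Factory.Domain.fetchInstance(conn, null, null);          // null = default domain, default property filter
        return Factory.ObjectStore.fetchInstance(domain, objectStoreName, null); // null = default property filter
    }
}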
I don't know which version of Case Manager you are talking about. However, for 5.2.1.x you will find ample references on IBM's site, for example here and here.

Embedded ApacheDS in my application fails on service.startup(): DefaultSchemaService.getSchemaManager() returns null

I'm trying to run an embedded ApacheDS in my application, after reading Running Apache DS embedded in my application
and http://directory.apache.org/apacheds/1.5/41-embedding-apacheds-into-an-application.html
Using the latest stable version, 1.5.7, this simple example fails when executing "service.startup();":
Exception in thread "main" java.lang.NullPointerException
at org.apache.directory.server.core.schema.DefaultSchemaService.initialize(DefaultSchemaService.java:380)
at org.apache.directory.server.core.DefaultDirectoryService.initialize(DefaultDirectoryService.java:1425)
at org.apache.directory.server.core.DefaultDirectoryService.startup(DefaultDirectoryService.java:907)
at Test3.runServer(Test3.java:41)
at Test3.main(Test3.java:24)
that is, DefaultSchemaService.getSchemaManager() returns null.
source code:
import java.util.Properties;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.NamingException;
import javax.naming.directory.Attributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;
import org.apache.directory.server.core.DefaultDirectoryService;
import org.apache.directory.server.core.partition.Partition;
import org.apache.directory.server.core.partition.impl.btree.jdbm.JdbmPartition;
import org.apache.directory.server.ldap.LdapServer;
import org.apache.directory.server.protocol.shared.transport.TcpTransport;
import org.apache.directory.shared.ldap.entry.ServerEntry;
import org.apache.directory.shared.ldap.name.DN;
public class Test3 {
public static void main(String[] args) throws Exception {
runServer();
testClient();
}
static void runServer() throws Exception {
DefaultDirectoryService service = new DefaultDirectoryService();
service.getChangeLog().setEnabled(false);
Partition partition = new JdbmPartition();
partition.setId("apache");
partition.setSuffix("dc=apache,dc=org");
service.addPartition(partition);
LdapServer ldapService = new LdapServer();
ldapService.setTransports(new TcpTransport(1400));
ldapService.setDirectoryService(service);
service.startup();
// Inject the apache root entry if it does not already exist
try {
service.getAdminSession().lookup(partition.getSuffixDn());
} catch (Exception lnnfe) {
DN dnApache = new DN("dc=Apache,dc=Org");
ServerEntry entryApache = service.newEntry(dnApache);
entryApache.add("objectClass", "top", "domain", "extensibleObject");
entryApache.add("dc", "Apache");
service.getAdminSession().add(entryApache);
}
ldapService.start();
}
static void testClient() throws NamingException {
Properties p = new Properties();
p.setProperty(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
p.setProperty(Context.PROVIDER_URL, "ldap://localhost:1400/");
p.setProperty(Context.SECURITY_PRINCIPAL, "uid=admin,ou=system");
p.setProperty(Context.SECURITY_CREDENTIALS, "secret");
p.setProperty(Context.SECURITY_AUTHENTICATION, "simple");
DirContext rootCtx = new InitialDirContext(p);
DirContext ctx = (DirContext) rootCtx.lookup("dc=apache,dc=org");
SearchControls sc = new SearchControls();
sc.setSearchScope(SearchControls.SUBTREE_SCOPE);
NamingEnumeration<SearchResult> searchResults = ctx.search("", "(objectclass=*)", sc);
while (searchResults.hasMoreElements()) {
SearchResult searchResult = searchResults.next();
Attributes attributes = searchResult.getAttributes();
System.out.println("searchResult.attributes: " + attributes) ;
}
}
}
It seems that ApacheDS versions 1.5.x are not backward compatible.
In ApacheDS 1.5.4 the call "service.startup();" works ok (but it fails somewhere else).
Any idea how to make this example work?
This version of ApacheDS needs the working directory to be defined explicitly:
service.setWorkingDirectory(new File("data"));
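The call has to happen before startup(); a minimal sketch of how the start of runServer() changes (the "data" directory is an arbitrary path, and java.io.File needs to be imported):

import java.io.File;

DefaultDirectoryService service = new DefaultDirectoryService();
// define the working directory before startup()
service.setWorkingDirectory(new File("data"));
service.getChangeLog().setEnabled(false);
// ... rest of runServer() as above, ending with service.startup();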
I recently faced a similar problem, and after a lot of googling I finally found something useful.
Example of embedding Apache Directory Server 1.5.7:
http://svn.apache.org/repos/asf/directory/documentation/samples/trunk/embedded-sample/src/main/java/org/apache/directory/seserver/EmbeddedADSVer157.java
Also have a look at the pom.xml for that example:
http://svn.apache.org/repos/asf/directory/documentation/samples/trunk/embedded-sample/pom.xml
Hope it helps!
