I am working on a Kubernetes integration using the Kubernetes Java client. I am looking for a way to delete Contour HTTPProxy objects that are in an invalid state, but I cannot figure out how to do it with the Java client.
I am aware that we can delete an Ingress using the code below:
k8sClient.extensions().ingresses().withName("my-ingress").delete();
Any help on how to delete a Contour HTTPProxy object from a Kubernetes namespace using the Java client would be appreciated.
Contour HTTPProxy is a custom resource. You can use either the typed API (requires CustomResource POJOs) or the typeless API (custom resource manipulation using raw maps) to delete an HTTPProxy.
Here is an example using the typeless API (based on KubernetesClient v5.4.1):
try (KubernetesClient client = new DefaultKubernetesClient()) {
    // Describe the HTTPProxy custom resource so the client knows which API endpoint to call
    CustomResourceDefinitionContext context = new CustomResourceDefinitionContext.Builder()
            .withKind("HTTPProxy")
            .withPlural("httpproxies")
            .withGroup("projectcontour.io")
            .withVersion("v1")
            .withScope("Namespaced")
            .build();

    boolean isDeleted = client.customResource(context).inNamespace("default").withName("root").delete();
    if (isDeleted) {
        logger.info("HTTPProxy {} successfully deleted.", "root");
    } else {
        logger.warn("Unable to delete HTTPProxy {} in {} namespace", "root", "default");
    }
} catch (KubernetesClientException exception) {
    logger.error("Exception in interacting with Kubernetes API", exception);
}
I am using Java to connect to JanusGraph via Gremlin, with the following code to create a vertex and an edge. Currently I am committing with g.tx().commit() as part of client.submit(), as shown below:
try {
    String sessionId = UUID.randomUUID().toString();
    Client client = cluster.connect(sessionId);
    client.submit("graph.tx().open()");
    client.submit("g.addV('Person').property('Name', 'Justin').next()");
    client.submit("graph.tx().commit()");
    List<Result> rs = client.submit("g.V().count()").all().join();
    System.out.println("Result size is " + rs.size());
    System.out.println(rs.get(0).getString());
    client.closeAsync();
} catch (Exception e) {
    e.printStackTrace(); // don't swallow exceptions silently
}
So I want to know whether there is a more appropriate way to handle transactions from Java, or whether this is the only way to do so.
Thanks,
Atul.
If you are submitting requests to a remote JanusGraph server, then that is the way to do it: you use connect(<sessionId>) to create a session and then submit scripts against it. In the recently released TinkerPop 3.5.0, however, there are changes to that rule. You can now do bytecode-based sessions as well as script-based sessions, which means that the transaction API is now unified for both embedded and remote use cases. You can see more in the 3.5.0 Upgrade Documentation found here.
The 3.5.0 release is quite recent, having only been announced a couple of weeks ago. As a result, at the time of this answer, JanusGraph does not yet support it (though work has started on it here). Until you are on a release of JanusGraph that supports TinkerPop 3.5.0, you have two options for transactions:
The one you are doing for remote use cases or,
Use JanusGraph in the embedded style.
For the latter, as taken from the documentation in the link provided:
graph = JanusGraphFactory.open("berkeleyje:/tmp/janusgraph")
juno = graph.addVertex() //Automatically opens a new transaction
juno.property("name", "juno")
graph.tx().commit() //Commits transaction
With TinkerPop 3.5.0-style transactions, the same flow in Java looks like this (taken from the repository linked below):
public boolean transactionExample() {
    System.out.println("Begin Transaction");
    Transaction tx = g.tx();
    String id = "123321";
    GraphTraversalSource gtx = tx.begin();
    try {
        gtx.addV("T").property(T.id, id).next();
        System.out.println("Searching before commit ==> " + gtx.V().hasId(id).elementMap().next());
        if (2 / 0 == 0) { // deliberately throws ArithmeticException to force the rollback path
            throw new TransactionException("throwing exception");
        }
        tx.commit();
        System.out.println("Committed Transaction");
    } catch (Exception ex) {
        System.out.println("Catching exception " + ex);
        System.out.println(gtx);
        tx.rollback();
        System.out.println("Rolled back Transaction");
    }
    System.out.println(gtx.tx().isOpen());
    return true;
}
For more information, refer to https://github.com/m-thirumal/gremlin-dsl
I am working on a Java library with some services based on XMPP. For XMPP communication I use Smack version 4.3.4. Development has so far been without problems, and I have also created some test routines that all run without errors. After migrating to a Maven project to generate a fat JAR, I wanted to convert the executable test cases into JUnit tests. Unexpectedly, an error occurs whose cause I cannot explain. As I said, the code runs outside of JUnit without any problems.
Below is the simplified test code (establishing a connection to the xmpp server):
@Test
public void connect()
{
    Builder builder = XMPPTCPConnectionConfiguration.builder();
    builder.setSecurityMode(SecurityMode.disabled);
    builder.setUsernameAndPassword("iec61850client", "iec61850client");
    builder.setPort(5222);
    builder.setSendPresence(true);
    try
    {
        builder.setXmppDomain("127.0.0.1");
        builder.setHostAddress(InetAddress.getByName("127.0.0.1"));
    }
    catch (Exception e)
    {
        e.printStackTrace();
    }
    XMPPTCPConnectionConfiguration config = builder.build();
    XMPPTCPConnection c = new XMPPTCPConnection(config);
    c.setReplyTimeout(5000);
    try
    {
        c.connect().login();
    }
    catch (Exception e)
    {
        e.printStackTrace();
    }
}
And here is the error message I get:
Exception in thread "Smack Reader (0)" java.lang.AssertionError
at org.jivesoftware.smack.tcp.XMPPTCPConnection$PacketReader.parsePackets(XMPPTCPConnection.java:1154)
at org.jivesoftware.smack.tcp.XMPPTCPConnection$PacketReader.access$1000(XMPPTCPConnection.java:1092)
at org.jivesoftware.smack.tcp.XMPPTCPConnection$PacketReader$1.run(XMPPTCPConnection.java:1112)
In Smack it boils down to this assert statement:
assert (config.getXMPPServiceDomain().equals(reportedServerDomain));
Any idea what the problem might be, or has anyone seen similar problems? I'm grateful for any help!
Thanks a lot,
Markus
If you look at the source code, you will find that reportedServerDomain is extracted from the server's stream open tag. In this case the XMPP domain reported by the server does not match the one that is configured. This should usually not happen, but I assume it is related to the way you run the unit tests, or more precisely, to the remote or mocked server that is used in the tests. If you enable Smack's debug output, you will see the stream open tag and the 'from' attribute and its value. Compare this with the configured XMPP service domain in the ConnectionConfiguration.
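To illustrate the comparison Smack performs, here is a simplified, standalone sketch (this is not Smack's actual parser; it just pulls the 'from' attribute out of a stream open tag so you can see why a configured domain of "127.0.0.1" would fail the assertion against a server that reports a real domain):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DomainCheck {
    // Extract the 'from' attribute from a stream open tag, or null if absent.
    static String reportedDomain(String streamOpenTag) {
        Matcher m = Pattern.compile("from=['\"]([^'\"]+)['\"]").matcher(streamOpenTag);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String open = "<stream:stream from='example.org' id='abc' version='1.0'>";
        String configured = "127.0.0.1"; // as in the test above
        // A mismatch here is what trips Smack's assert
        System.out.println(configured.equals(reportedDomain(open))); // prints false
    }
}
```

In the real failure, compare the 'from' value shown in Smack's debug log with what you passed to setXmppDomain().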
I am getting "extension (5) should not be presented in certificate_request" when trying to run a Java Kubernetes client application locally, which queries the Kubernetes cluster over a kubectl proxy connection. Any thoughts? Thanks in advance.
ApiClient client = null;
try {
    client = Config.defaultClient();
    //client.setVerifyingSsl(false);
} catch (IOException e) {
    e.printStackTrace();
}
Configuration.setDefaultApiClient(client);

CoreV1Api api = new CoreV1Api();
V1PodList list = null;
try {
    list = api.listPodForAllNamespaces(null, null, null, null, null, null, null, null, null);
} catch (ApiException e) {
    e.printStackTrace();
}
for (V1Pod item : list.getItems()) {
    System.out.println(item.getMetadata().getName());
}
Which version of Java are you using?
JDK 11 onwards supports TLS 1.3, which can cause the error "extension (5) should not be presented in certificate_request".
Add -Djdk.tls.client.protocols=TLSv1.2 to the JVM args to make it use TLS 1.2 instead.
There is a related issue on the Go side (https://github.com/golang/go/issues/35722), where someone also suggested disabling TLS 1.3 on the Java side.
Alternatively, upgrade your JDK to a more recent version that includes the fix.
Some minimum versions with this fix are: OpenJDK 8u272, 11.0.7, 14.0.2.
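If you cannot change the launch command, the same system property can be set programmatically, as long as it runs before the JVM performs its first TLS handshake. A minimal sketch:

```java
public class ForceTls12 {
    public static void main(String[] args) {
        // Must run before any TLS connection is opened in this JVM;
        // JSSE reads this property when the client protocols are first resolved.
        System.setProperty("jdk.tls.client.protocols", "TLSv1.2");
        System.out.println(System.getProperty("jdk.tls.client.protocols")); // prints TLSv1.2
    }
}
```

Setting it in code is convenient for tests, but the -D flag is safer in production because it is guaranteed to be set before any library code runs.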
Instead of connecting via kubectl proxy, connect to the Kubernetes API server directly from the application by providing a kubeconfig file to the Java client.
I'm attempting to use the GCP Java SDK to send messages to a Pub/Sub topic using the following code (replaced the actual project ID and topic name with placeholders in this snippet):
Publisher publisher = null;
ProjectTopicName topic = ProjectTopicName.newBuilder()
        .setProject("MY_PROJECT_ID")
        .setTopic("MY_TOPIC")
        .build();

try {
    publisher = Publisher.newBuilder(topic).build();
    for (final String message : data) {
        ByteString messageBytes = ByteString.copyFromUtf8(message);
        PubsubMessage pubsubMessage = PubsubMessage.newBuilder().setData(messageBytes).build();
        ApiFuture<String> future = publisher.publish(pubsubMessage);
    }
} catch (IOException ex) {
    ex.printStackTrace();
} finally {
    if (publisher != null) {
        publisher.shutdown();
    }
}
This results in the following exception:
Exception in thread "main" java.lang.AbstractMethodError: com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.needsCredentials()Z
at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:157)
at com.google.cloud.pubsub.v1.stub.GrpcPublisherStub.create(GrpcPublisherStub.java:164)
at com.google.cloud.pubsub.v1.Publisher.<init>(Publisher.java:171)
at com.google.cloud.pubsub.v1.Publisher.<init>(Publisher.java:85)
at com.google.cloud.pubsub.v1.Publisher$Builder.build(Publisher.java:718)
at com.westonsankey.pubsub.MessageWriter.sendMessagesToPubSub(MessageWriter.java:35)
at com.westonsankey.pubsub.MessageWriter.main(MessageWriter.java:24)
I've set the GOOGLE_APPLICATION_CREDENTIALS environment variable to point to the JSON private key file, and have confirmed that I can access other GCP resources in this application using that private key. The service account has project owner, and I've verified via the Pub/Sub console that the service account has the appropriate permissions.
Are there any extra steps required to authenticate with Pub/Sub?
The problem isn't accessing the credentials. It looks like a version conflict on the gax-java library: the needsCredentials method was added in v1.46 in June 2019. Perhaps you are explicitly depending on an older version, or another dependency is pulling in an older version and leaking it onto your classpath. If it's the former, update to version 1.46 or later. If it's the latter, you may need to shade the dependency.
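If you manage the dependency yourself, pinning gax-grpc explicitly in the POM is usually enough. A sketch (the version number below is just an example of a release at or above 1.46, not a tested combination):

```
<dependency>
  <groupId>com.google.api</groupId>
  <artifactId>gax-grpc</artifactId>
  <version>1.46.1</version>
</dependency>
```

To find out which dependency is pulling in the old version, run `mvn dependency:tree` and search the output for gax.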
I am trying to connect to an external WebLogic embedded LDAP from Oracle ADF.
I found a sample that uses the JpsContextFactory class; it does not take any URL, username, or password, and it seems to connect to the local WebLogic LDAP by default. I could not figure out how to set up a connection to an external WebLogic LDAP using this class.
The sample code:
private void initIdStoreFactory() {
    JpsContextFactory ctxFactory;
    try {
        ctxFactory = JpsContextFactory.getContextFactory();
        JpsContext ctx = ctxFactory.getContext();
        LdapIdentityStore idStoreService = (LdapIdentityStore) ctx.getServiceInstance(IdentityStoreService.class);
        ldapFactory = idStoreService.getIdmFactory();
        storeEnv.put(OIDIdentityStoreFactory.RT_USER_SEARCH_BASES, USER_BASES);
        storeEnv.put(OIDIdentityStoreFactory.RT_GROUP_SEARCH_BASES, GROUP_BASES);
        storeEnv.put(OIDIdentityStoreFactory.RT_USER_CREATE_BASES, USER_BASES);
        storeEnv.put(OIDIdentityStoreFactory.RT_GROUP_CREATE_BASES, GROUP_BASES);
        storeEnv.put(OIDIdentityStoreFactory.RT_GROUP_SELECTED_CREATE_BASE, GROUP_BASES[0]);
        storeEnv.put(OIDIdentityStoreFactory.RT_USER_SELECTED_CREATE_BASE, USER_BASES[0]);
    } catch (JpsException e) {
        e.printStackTrace();
        throw new RuntimeException("Jps Exception encountered", e);
    }
}
Any suggestion on how to use this code to connect to an external LDAP would be appreciated.
JpsContextFactory is utilised to retrieve the current configuration of the identity store(s) inside WebLogic. In order to use it with an external LDAP, you first need to add a new security provider in WebLogic and declare it as required, so that your application utilises the new external LDAP.
Check this old article on how to do it: http://www.itbuzzpress.com/weblogic-tutorials/securing-oracle-weblogic/configuring-oracle-weblogic-security-providers.html
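If the goal is simply to query an external LDAP directly, bypassing OPSS, plain JNDI is an alternative worth considering. A sketch of the environment setup; the host, port, and credentials below are placeholders, not values from the question:

```java
import java.util.Hashtable;
import javax.naming.Context;

public class LdapEnv {
    // Builds the JNDI environment for a simple-bind LDAP connection.
    static Hashtable<String, String> buildEnv(String url, String principal, String credentials) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, url);
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, principal);
        env.put(Context.SECURITY_CREDENTIALS, credentials);
        return env;
    }

    public static void main(String[] args) {
        Hashtable<String, String> env =
                buildEnv("ldap://external-host:7001", "cn=Admin", "secret");
        // new javax.naming.directory.InitialDirContext(env) would open the connection
        System.out.println(env.get(Context.PROVIDER_URL));
    }
}
```

Note that this only talks to the LDAP server; it does not integrate with WebLogic's authentication providers the way the JpsContextFactory approach does.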