How to perform system property operations in WildFly via REST?

This documentation states that one can perform certain operations on a WildFly server via REST: https://docs.jboss.org/author/display/WFLY10/The%20HTTP%20management%20API.html
However, there is no example of how to add/remove/read a system property, and I have no idea what the HTTP body has to look like for those calls.
The answer to the following StackOverflow question says that the SimpleOperation class used in the example does not actually exist: Wildfly 10 management Rest API
I would like to do the following operations:
/system-property=BLA:remove
/system-property=BLA:add(value="1,2,3,4")
and to read it.
How can I perform these operations via REST with the WildFly HTTP management API? Ideally, I would use a Java API if one exists.

With the org.wildfly.core:wildfly-controller-client API you could do something like this:
import java.io.IOException;

import org.jboss.as.controller.client.ModelControllerClient;
import org.jboss.as.controller.client.helpers.Operations;
import org.jboss.dmr.ModelNode;

try (ModelControllerClient client = ModelControllerClient.Factory.create("localhost", 9990)) {
    final ModelNode address = Operations.createAddress("system-property", "test.property");

    // Remove the property: /system-property=test.property:remove
    ModelNode op = Operations.createRemoveOperation(address);
    ModelNode result = client.execute(op);
    if (!Operations.isSuccessfulOutcome(result)) {
        throw new RuntimeException("Failed to remove property: " + Operations.getFailureDescription(result).asString());
    }

    // Add the property: /system-property=test.property:add(value="test-value")
    op = Operations.createAddOperation(address);
    op.get("value").set("test-value");
    result = client.execute(op);
    if (!Operations.isSuccessfulOutcome(result)) {
        throw new RuntimeException("Failed to add property: " + Operations.getFailureDescription(result).asString());
    }
}
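The question also asks how to read the property back; a minimal sketch, placed inside the same try-with-resources block (createReadAttributeOperation and readResult are further helpers from the same Operations class):

// Read the value back: /system-property=test.property:read-attribute(name=value)
ModelNode readOp = Operations.createReadAttributeOperation(address, "value");
ModelNode readResult = client.execute(readOp);
if (Operations.isSuccessfulOutcome(readResult)) {
    // readResult(...) unwraps the "result" node of the management response
    System.out.println("value = " + Operations.readResult(readResult).asString());
}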
You can use the REST API too; however, you'll need a way to perform digest authentication (one option is sketched after the snippet below).
import javax.json.Json;
import javax.json.JsonObject;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Entity;
import javax.ws.rs.core.HttpHeaders;
import javax.ws.rs.core.Response;

Client client = null;
try {
    // DMR operation encoded as JSON: /system-property=test.property.2:add(value="test-value")
    final JsonObject json = Json.createObjectBuilder()
            .add("address", Json.createArrayBuilder()
                    .add("system-property")
                    .add("test.property.2"))
            .add("operation", "add")
            .add("value", "test-value")
            .build();
    client = ClientBuilder.newClient();
    final Response response = client.target("http://localhost:9990/management/")
            .request()
            .header(HttpHeaders.AUTHORIZATION, "Digest <settings>")
            .post(Entity.json(json));
    System.out.println(response.getStatusInfo());
} finally {
    if (client != null) client.close();
}
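For the digest authentication, one option (assuming Jersey is the JAX-RS implementation on the classpath) is its HttpAuthenticationFeature; the same management endpoint also accepts a read-attribute operation, so reading the property back could look like the sketch below (the credentials are placeholders):

import javax.json.Json;
import javax.json.JsonObject;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Entity;
import javax.ws.rs.core.Response;

import org.glassfish.jersey.client.authentication.HttpAuthenticationFeature;

// Jersey-specific feature that handles the digest challenge/response
// ("admin" / "secret" are placeholder credentials)
Client client = ClientBuilder.newClient()
        .register(HttpAuthenticationFeature.digest("admin", "secret"));
try {
    // /system-property=test.property.2:read-attribute(name=value)
    JsonObject readOp = Json.createObjectBuilder()
            .add("address", Json.createArrayBuilder()
                    .add("system-property")
                    .add("test.property.2"))
            .add("operation", "read-attribute")
            .add("name", "value")
            .build();
    Response response = client.target("http://localhost:9990/management/")
            .request()
            .post(Entity.json(readOp));
    System.out.println(response.readEntity(String.class));
} finally {
    client.close();
}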

Related

Vertx client is taking time to check for failure

I have a requirement where I connect one microservice to another via a Vert.x client. In the code I check whether the other microservice (which calls Solr via load balancing) is down; on failure, a JsonObject should be created with solrError as the key and the failure message as the value, and an error response should be returned. However, the Vert.x client takes some time to report the failure, so by the time the condition is checked there is no solrError in the JsonObject yet; the condition fails and resp comes back null. What can be done so that the failure is detected before the solrError check runs, and an Internal Server Error response is returned?
Below is the code:
solrQueryService.executeQuery(query).subscribe().with(jsonObject -> {
    ObjectMapper objMapper = new ObjectMapper();
    SolrOutput solrOutput = new SolrOutput();
    List<Doc> docs = new ArrayList<>();
    try {
        if (null != jsonObject.getMap().get("solrError")) {
            resp = Response.status(Response.Status.INTERNAL_SERVER_ERROR)
                    .entity(new BaseException(
                            exceptionService.processSolrDownError(request.header.referenceId))
                            .getResponse()).build();
        }
        solrOutput = objMapper.readValue(jsonObject.toString(), SolrOutput.class);
        if (null != solrOutput.getResponse()
                && CollectionUtils.isNotEmpty(solrOutput.getResponse().getDocs())) {
            docs.addAll(solrOutput.getResponse().getDocs());
            uniDocList = Uni.createFrom().item(docs);
        }
    } catch (JsonProcessingException e) {
        e.printStackTrace();
    }
});
if (null != resp && resp.getStatus() != 200) {
    return resp;
}
SolrQueryService prepares the query and sends the URL and query to the Vert.x web client as shown below:
public Uni<JsonObject> search(URL url, SolrQuery query, Integer timeout) {
    int port = url.getPort();
    if (port == -1 && "https".equals(url.getProtocol())) {
        port = 443;
    }
    if (port == -1 && "http".equals(url.getProtocol())) {
        port = 80;
    }
    HttpRequest<Buffer> request = client.post(port, url.getHost(), url.getPath()).timeout(timeout);
    return request.sendJson(query).map(resp -> {
        return resp.bodyAsJsonObject();
    }).onFailure().recoverWithUni(f -> {
        return Uni.createFrom().item(new JsonObject().put("solrError", f.getMessage()));
    });
}
I have not used the Vert.x client, but I assume it's reactive and non-blocking. Assuming this is the case, your code mixes imperative and reactive constructs. The subscribe in the first line is reactive, and the lambda you provide will be called when the server responds to the client request. However, the imperative code after the subscribe runs before the lambda has even had a chance to be called, so your checks on the resp object will never reflect what happened inside the lambda.
You need to move all the code into the lambda, or at least chain the subsequent code onto the result of the subscribe, as in the sketch below.
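A minimal sketch of what that chaining could look like with Mutiny, assuming a resource method that can return Uni<Response> (the method name and the entity-building details are stand-ins for the asker's own logic):

// Sketch only: build the Response inside the reactive pipeline instead of
// inspecting a shared "resp" variable after subscribe() returns.
public Uni<Response> searchDocs(SolrQuery query) {
    return solrQueryService.executeQuery(query)
            .onItem().transform(jsonObject -> {
                if (jsonObject.getMap().get("solrError") != null) {
                    // the downstream service / Solr is down
                    return Response.status(Response.Status.INTERNAL_SERVER_ERROR).build();
                }
                // happy path: return the payload (mapping to SolrOutput elided)
                return Response.ok(jsonObject.encode()).build();
            });
}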

Different behavior of OkHttp3 on Linux and Windows

I have some code which works fine on Windows; however, it behaves differently on Linux.
I used the code below to submit a request to an HTTP server to get some messages. What I have done is as follows:
1. Deploy the code on my local Windows machine, then trigger a request and get the server response.
parameters:
{"articleid":"","endtime":"2019-10-29T18:00:00","starttime":"2019-10-29T16:00:00","areaid":"","title":"","pageIndex":"1"}
server response:
{"result":1,"errorcode":"","message":"","pageindex":1,"nextpage":2,"pagesize":100,"data":[...
some data here ...]}
2. Deploy the code on a Linux server and trigger the request with the same parameters as in step 1; however, the server response is different.
parameters:
{"articleid":"","endtime":"2019-10-29T18:00:00","starttime":"2019-10-29T16:00:00","areaid":"","title":"","pageIndex":"1"}
server response:
{"result":1,"errorcode":"","message":"","pageindex":1,"nextpage":null,"pagesize":0,"data":[]}
We have looked through the code but cannot find what causes the different behavior.
I suspect there may be Java class files with the same name in different jars, and that Windows and Linux load different class files, causing the problem; but after looking through the jar files, I still have no idea. The OkHttp-related jar files are as follows:
okhttp-3.10.0.jar
okio-1.14.0.jar
netty-codec-http-4.1.31.Final.jar
httpcore-nio-4.4.10.jar
httpcore-4.4.10.jar
httpclient-4.5.6.jar
httpasyncclient-4.1.4.jar
public static String okHttpPost(String requestUrl, Map<String, String> map, String orgId, String taskID) throws IOException {
    String exceptionMessage = "";
    String responseResult = "";
    try {
        FormBody.Builder newFormBody = new FormBody.Builder();
        Set<String> keys = map.keySet();
        for (String key : keys) {
            newFormBody.add(key, map.get(key));
        }
        RequestBody body = newFormBody.build();
        log.info("server url : " + requestUrl + ";parameters:" + new ObjectMapper().writeValueAsString(map));
        Request request = new Request.Builder()
                .url(requestUrl)
                .post(body)
                .build();
        Call call = okHttpClient.newCall(request);
        Response response = call.execute();
        if (response.code() != 200) {
            exceptionMessage = "request failed, taskID:" + taskID + "orgid:" + orgId + "response message:" + response.toString();
            log.info(exceptionMessage);
        }
        responseResult = response.body().string();
        log.info("server url : " + requestUrl + ", response messages:" + responseResult);
        return responseResult;
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        if (!responseResult.contains("token")) {
            // do something
        }
    }
    return null;
}
Can anyone give any ideas on why the same code behaves differently on Windows and Linux?
How can I change the code so it works well on Linux?

Importing csv data from Storage to Cloud SQL not working - status always "pending"

I am new to Java (I have experience with C#, though).
Sadly, I inherited a terrible project (the code is terrible), and what I need to accomplish is to import some CSV files into Cloud SQL.
There's a web service which runs this task; apparently the dev followed this guide to import data, but it is not working. Here's the code (the essential parts; it is actually longer and uglier):
InstancesImportRequest requestBody = new InstancesImportRequest();
ImportContext ic = new ImportContext();
ic.setKind("sql#importContext");
ic.setFileType("csv");
ic.setUri(bucketPath);
ic.setDatabase(CLOUD_SQL_DATABASE);
CsvImportOptions csv = new CsvImportOptions();
csv.setTable(tablename);
List<String> list = new ArrayList<String>();
// here there is some code that populates the list with the columns
csv.setColumns(list);
ic.setCsvImportOptions(csv);
requestBody.setImportContext(ic);

SQLAdmin sqlAdminService = createSqlAdminService();
SQLAdmin.Instances.SQLAdminImport request = sqlAdminService.instances().sqladminImport(project, instance, requestBody);
Operation response = request.execute();
System.out.println("Executed : Going to sleep.>" + response.getStatus());
int c = 1;
while (!response.getStatus().equalsIgnoreCase("Done")) {
    Thread.sleep(10000);
    System.out.println("sleeped enough >" + response.getStatus());
    c++;
    if (c == 50) {
        System.out.println("timeout?");
        break;
    }
}
public static SQLAdmin createSqlAdminService() throws IOException, GeneralSecurityException {
    HttpTransport httpTransport = GoogleNetHttpTransport.newTrustedTransport();
    JsonFactory jsonFactory = JacksonFactory.getDefaultInstance();
    GoogleCredential credential = GoogleCredential.getApplicationDefault();
    if (credential.createScopedRequired()) {
        credential = credential.createScoped(Arrays.asList("https://www.googleapis.com/auth/cloud-platform"));
    }
    return new SQLAdmin.Builder(httpTransport, jsonFactory, credential)
            .setApplicationName("Google-SQLAdminSample/0.1")
            .build();
}
I am not quite sure how the response should be treated; it seems to be an async request. Either way, I always get status Pending; it seems the operation never even starts executing.
Of course it ends up timing out. What is wrong here, and why does the request never start? I couldn't find any actual example on the internet of using this Java SDK to import files, except the link I gave above.
Well, the thing is that the response object is just a snapshot, so it will always return "Pending": that initial status is stored as a string in the object and is never updated.
To get the actual status, you have to request it from Google using the SDK. I did something like this (it would be better to use a smaller sleep time and make it grow with each retry):
SQLAdmin.Instances.SQLAdminImport request = sqlAdminService.instances().sqladminImport(CLOUD_PROJECT, CLOUD_SQL_INSTANCE, requestBody);
// execution of our import request
Operation response = request.execute();
int tried = 0;
Operation statusOperation;
do {
    // sleep one minute
    Thread.sleep(60000);
    // here we are requesting the status of our operation. Name is actually the unique identifier
    Get requestStatus = sqlAdminService.operations().get(CLOUD_PROJECT, response.getName());
    statusOperation = requestStatus.execute();
    tried++;
    System.out.println("status is: " + statusOperation.getStatus());
} while (!statusOperation.getStatus().equalsIgnoreCase("DONE") && tried < 10);
if (!statusOperation.getStatus().equalsIgnoreCase("DONE")) {
    throw new Exception("import failed: Timeout");
}
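To act on the parenthetical advice above, the fixed one-minute sleep could be replaced by a delay that starts small and grows; a sketch (the initial delay and cap are arbitrary choices):

// Variant of the polling loop with a growing (exponential) delay
long delayMs = 5_000;             // arbitrary starting delay
final long maxDelayMs = 60_000;   // arbitrary cap
int tried = 0;
Operation statusOperation;
do {
    Thread.sleep(delayMs);
    delayMs = Math.min(delayMs * 2, maxDelayMs);
    statusOperation = sqlAdminService.operations().get(CLOUD_PROJECT, response.getName()).execute();
    tried++;
} while (!statusOperation.getStatus().equalsIgnoreCase("DONE") && tried < 10);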

How to connect to ElasticSearch with Java transport client?

I am following the Elasticsearch documentation on the Java client. I have started Elasticsearch and I can interact with it via the REST API. I want to use the Java client, and so far I have a main like this:
public class TestElastic {

    public static void main(String[] args) {
        try {
            TransportClient client = TransportClient.builder().build()
                    .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("127.0.0.1"), 9300));
            JSONObject place = new JSONObject();
            place.put("name", "AAAAA");
            IndexResponse response = client.prepareIndex("my_database", "places", "1")
                    .setSource(place)
                    .get();
            System.out.println(response.toString());
            // Index name
            String _index = response.getIndex();
            System.out.println(_index);
            // Type name
            String _type = response.getType();
            System.out.println(_type);
            // Document ID (generated or not)
            String _id = response.getId();
            System.out.println(_id);
            // Version (if it's the first time you index this document, you will get: 1)
            long _version = response.getVersion();
            System.out.println(_version);
            // isCreated() is true if the document is a new one, false if it has been updated
            boolean created = response.isCreated();
            System.out.println(created);
            client.close();
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
}
In the Java logs I can see that there is a connection to 127.0.0.1:9300. But after the "prepare index" command I do not see any error, and nothing is printed (I have some System.out calls). There is also nothing related in the Elasticsearch logs. When I create an index with the REST API, I do see it in the logs.
OK, as @Val mentioned, I forgot to print the errors. The problem was that JSONObject is not a format that Elasticsearch accepts; Map and HashMap are acceptable.
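A minimal sketch of that fix, swapping the JSONObject for a plain Map (same index, type, and id as in the question):

import java.util.HashMap;
import java.util.Map;

// setSource(...) accepts a Map, which the transport client can serialize
Map<String, Object> place = new HashMap<>();
place.put("name", "AAAAA");

IndexResponse response = client.prepareIndex("my_database", "places", "1")
        .setSource(place)
        .get();
System.out.println(response.toString());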

Spring Security 3.0 Google Apps open id sign in using OpenID4Java

I am trying to sign in with Google Apps OpenID using the OpenID4Java library.
I discover the user's service with the following code in the consumer class:
try
{
    discoveries = consumerManager.discover(identityUrl);
}
catch (DiscoveryException e)
{
    throw new OpenIDConsumerException("Error during discovery", e);
}
DiscoveryInformation information = consumerManager.associate(discoveries);
HttpSession session = req.getSession(true);
session.setAttribute(DiscoveryInformation.class.getName(), information);
AuthRequest authReq;
try
{
    authReq = consumerManager.authenticate(information, returnToUrl, realm);
    // check for OpenID Simple Registration request needed
    if (attributesByProvider != null || defaultAttributes != null)
    {
        // I set the attributes needed for getting the email of the user
    }
}
catch (Exception e)
{
    throw new OpenIDConsumerException("Error processing ConsumerManager authentication", e);
}
return authReq.getDestinationUrl(true);
Next I get the parameters from the HTTP request, and in the openid.claimed_id property I receive "http://domain.com/openid?id=....". If I try to verify the response with consumerManager.verify(receivingURL.toString(), openidResp, discovered), an exception is thrown: org.openid4java.discovery.yadis.YadisException: 0x706: GET failed on http://domain.com/openid?id=... : 404:Not Found.
To avoid the exception, I tried to modify the parameter list, changing the value "http://domain.com/openid?id=...." to "https://www.google.com/a/domain.com/openid?id=....":
// extract the receiving URL from the HTTP request
StringBuffer receivingURL = request.getRequestURL();
String queryString = request.getQueryString();
// extract the parameters from the authentication response
// (which comes in as a HTTP request from the OpenID provider)
ParameterList openidResp = new ParameterList(request.getParameterMap());
Parameter endPoint = openidResp.getParameter("openid.op_endpoint");
if (endPoint != null && endPoint.getValue().startsWith("https://www.google.com/a/"))
{
    Parameter parameter = openidResp.getParameter("openid.claimed_id");
    if (parameter != null)
    {
        String value = "https://www.google.com/a/" + parameter.getValue().replaceAll("http://", "");
        openidResp.set(new Parameter("openid.claimed_id", value));
        queryString = queryString.replaceAll("openid.claimed_id=http%3A%2F%2F", "openid.claimed_id=https%3A%2F%2Fwww.google.com%2Fa%2F");
    }
    parameter = openidResp.getParameter("openid.identity");
    if (parameter != null)
    {
        String value = "https://www.google.com/a/" + parameter.getValue().replaceAll("http://", "");
        openidResp.set(new Parameter("openid.identity", value));
        queryString = queryString.replaceAll("openid.claimed_id=http%3A%2F%2F", "openid.claimed_id=https%3A%2F%2Fwww.google.com%2Fa%2F");
    }
}
if ((queryString != null) && (queryString.length() > 0))
{
    receivingURL.append("?").append(queryString);
}
// retrieve the previously stored discovery information
DiscoveryInformation discovered = (DiscoveryInformation) request.getSession().getAttribute(DiscoveryInformation.class.getName());
// verify the response
VerificationResult verification;
Map userDetails = new HashMap();
try
{
    verification = consumerManager.verify(receivingURL.toString(), openidResp, discovered);
    // check for OpenID Simple Registration request needed
    if (attributesByProvider != null || defaultAttributes != null)
    {
        // Here I get the value of requested attributes
    }
}
catch (Exception e)
{
    throw new OpenIDConsumerException("Error verifying openid response", e);
}
// examine the verification result and extract the verified identifier
Identifier verified = null;
if (verification != null)
{
    verified = verification.getVerifiedId();
}
OpenIDAuthenticationToken returnToken;
List attributes = null;
if (verified != null)
    returnToken = new OpenIDAuthenticationToken(OpenIDAuthenticationStatus.SUCCESS, verified.getIdentifier(), "some message", attributes);
else
{
    Identifier id = discovered.getClaimedIdentifier();
    return new OpenIDAuthenticationToken(OpenIDAuthenticationStatus.FAILURE, id == null ? "Unknown" : id.getIdentifier(), "Verification status message: [" + verification.getStatusMsg() + "]", attributes);
}
Now the consumerManager.verify method no longer throws an exception, but its status changes to failed. The following errors appear in the log:
09:46:45,424 ERROR ConsumerManager,http-80-1:1759 - No service element found to match the ClaimedID / OP-endpoint in the assertion.
09:46:45,428 ERROR ConsumerManager,http-80-1:1183 - Discovered information verification failed.
I saw a similar problem on a forum, but the solution there was to change consumerManager.verify to consumerManager.verifyNonce. I'm not sure that using this method won't create a security issue. Do you have any idea what I should change to make my OpenID consumer work with Google Apps OpenID?
Google Apps uses a slightly different discovery process than what is supported in the base version of OpenID4Java. There's an add-on library at http://code.google.com/p/step2/ that you might find useful (and a higher-level wrapper at http://code.google.com/p/openid-filter/).
I'm not aware of anyone who has done Spring Security integration with the modified Step2 classes, but it shouldn't be too hard to modify the code to set up Step2 appropriately. It's built on OpenID4Java, and the code to write a relying party is mostly the same.
