Azure App Configuration Feature Management - Java

I am looking for a solution using Maven and Java (not Spring) where I can upload all my keys, labels, and flag values via JSON to deploy.
When I configure my project in Jenkins, it should apply all the values that have changed.
Kindly provide me some direction; I have tried a lot, but there is little material on this topic.

I managed to work out the solution, basically following this Microsoft Azure link, though the link did not completely solve my problem. Below is the code snippet that solved it. The code is not testable or production-ready; it is just for reference.
public void process() {
    // Feature flag payload in the JSON shape App Configuration expects
    String value = "{\"id\": \"test\", \"description\": \"Sample Feature\",\"enabled\": false,\"conditions\": { \"client_filters\": []}}";

    // Authenticate with Azure AD and point the client at the store's endpoint
    DefaultAzureCredential credential = new DefaultAzureCredentialBuilder().build();
    ConfigurationClient configurationClient = new ConfigurationClientBuilder()
            .endpoint(END_POINT)
            .credential(credential)
            .buildClient();

    // A feature flag is an ordinary setting with a special key prefix and content type;
    // the documented prefix is ".appconfig.featureflag/"
    final ConfigurationSetting configurationSetting = new ConfigurationSetting();
    configurationSetting.setKey(String.format(".appconfig.abc/%s", "abc"));
    configurationSetting.setLabel("label");
    configurationSetting.setContentType("application/vnd.microsoft.appconfig.ff+json;charset=utf-8");
    configurationSetting.setValue(value);

    configurationClient.addConfigurationSettingWithResponse(configurationSetting, Context.NONE);
}
The key point here is ".appconfig.abc". At this point in time there is no direct API call for Feature Management, but we can add keys and labels to the configuration as shown in the code snippet, using a key with the ".appconfig.abc" prefix, which you can get from the portal. The value should be a JSON object; how you build this JSON is really up to you.
Overall there is a lot of information around the web, but none of it is connected in the Java world for Azure. Maybe this will be helpful to someone.
The endpoint can be obtained from the configuration store's Access Keys.
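To cover the original bulk-upload requirement, here is a minimal sketch that reads flags from a JSON file and upserts each one, so a Jenkins run applies whatever changed. The flags.json name and its field layout are assumptions; Jackson is assumed to be on the classpath, and ".appconfig.featureflag/" is the documented key prefix for feature flags.
ObjectMapper mapper = new ObjectMapper();
JsonNode flags = mapper.readTree(new File("flags.json")); // hypothetical array of flag objects
for (JsonNode flag : flags) {
    ConfigurationSetting setting = new ConfigurationSetting()
            .setKey(String.format(".appconfig.featureflag/%s", flag.get("id").asText()))
            .setLabel(flag.get("label").asText()) // assumed field in the JSON file
            .setContentType("application/vnd.microsoft.appconfig.ff+json;charset=utf-8")
            .setValue(flag.toString());
    // setConfigurationSetting upserts, so re-running the job applies changes
    configurationClient.setConfigurationSetting(setting);
}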

Related

Kafka: assigning a name to an internal topic that is created after map and repartition does not seem to be possible

With this simple topology, stripped down for reproducing the issue:
public KStream<PropertyValueKey, PropertyValue> configureTopology(StreamsBuilder builder, Properties serdesProps) {
    KStream<PropertyValueKey, PropertyValue> propertyValues =
            builder.stream(kafkaProperties.getPropertyValuesTopicName());
    KStream<PropertyTypeKey, PropertyValueWithKey> propertyValuesByType =
            propertyValues.map((valueKey, value) -> KeyValue.pair(
                            new PropertyTypeKey(valueKey.getProjectId(), valueKey.getPropertyTypeId()),
                            new PropertyValueWithKey(valueKey, value)),
                    Named.as("map1"))
                    .repartition();
    return propertyValues;
}
I am seeing an internally created topic appname-KSTREAM-REPARTITION-0000000002-repartition.
I can't seem to find a way to override this internal name. I have successfully overridden topic names in the past, when it came to stores, using Materialized.as as described in https://docs.confluent.io/platform/current/streams/developer-guide/dsl-topology-naming.html, but that doesn't work the same way with this map function. Is there any way to do this? Named.as does not have the intended effect, or any effect that I can spot.
This is with the Kafka docker image confluentinc/cp-kafka:6.2.0 and the Java client 2.8.1; upgrading to 3.1.0 did not matter.
It was as easy as .repartition(Repartitioned.as("property-values-by-subject-repartitioned"));. Unfortunately that wasn't mentioned in the docs, but it was in the Javadoc after some more searching.
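Applied to the topology above, the fix looks like this (the topic name is the answerer's choice; Kafka Streams expands it to appname-<name>-repartition):
KStream<PropertyTypeKey, PropertyValueWithKey> propertyValuesByType =
        propertyValues.map((valueKey, value) -> KeyValue.pair(
                        new PropertyTypeKey(valueKey.getProjectId(), valueKey.getPropertyTypeId()),
                        new PropertyValueWithKey(valueKey, value)),
                Named.as("map1"))
                // Repartitioned.as names the internal repartition topic explicitly
                .repartition(Repartitioned.as("property-values-by-subject-repartitioned"));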

Accessing SecureString SSM parameters with Scala

I'm using a Scala script in Glue to access a third-party vendor with a dependent library. You can see the template I'm working off here.
This solution works well, but it runs with the parameters stored in the clear. I'd like to move those to AWS SSM and store them as a SecureString. To accomplish this, I believe the function would have to pull a CMK from KMS, then pull the SecureString and use the CMK to decrypt it.
I poked around the internet trying to find code examples for something as simple as pulling an SSM parameter from within Scala, but I wasn't able to find anything. I've only just started using the language and I'm not very familiar with its structure; is the expectation that the aws-java libraries would also work in Scala for these kinds of operations? I've tried this but am getting compilation errors in Glue. Just for example:
import software.amazon.awscdk.services.ssm.StringParameter;

object SfdcExtractData {
  def main(sysArgs: Array[String]) {
    print("starting")
    String secureStringToken = StringParameter.valueForSecureStringParameter(this, "my-secure-parameter-name", 1); // must specify version
This gives a compilation error, although AWS Glue doesn't do a good job of telling me what the issue is.
Thank you for your time! If you have any code examples, insight, or resources, please let me know. My job is running Scala 2 on Spark 2.4.
I was able to do this with the following code snippet:
import com.amazonaws.services.simplesystemsmanagement.AWSSimpleSystemsManagementClient
import com.amazonaws.services.simplesystemsmanagement.model.GetParameterRequest
import com.amazonaws.services.simplesystemsmanagement.model.GetParameterResult

// Create the SSM client
val client = new AWSSimpleSystemsManagementClient()

// Build the request and set the name of the parameter to fetch
val req = new GetParameterRequest()
req.setName("test")

// Only needed if the parameter is a SecureString encrypted with the default KMS key.
// If you're using a CMK, you need to add the Glue role as a key user:
// KMS console -> Customer managed keys -> click the key used for encryption ->
// Key policy -> Key users -> Add (add the Glue role).
req.setWithDecryption(true)

// Call getParameter() and read the decrypted value
val param: GetParameterResult = client.getParameter(req)
val value = param.getParameter.getValue
Remember to give your Glue role IAM permissions for SSM (e.g. ssm:GetParameter) too!

Get all issues stored in JIRA

Hi, I want to get all issues stored in JIRA from Java, using JQL or any other way.
I tried to use this code:
for (String name : getProjectsNames()) {
    String jqlRequest = "project = \"" + name + "\"";
    SearchResult result = restClient.getSearchClient().searchJql(jqlRequest, 10000, 0, pm);
    final Iterable<BasicIssue> issues = result.getIssues();
    for (BasicIssue is : issues) {
        Issue issue = restClient.getIssueClient().getIssue(is.getKey(), pm);
        // ...
    }
}
It gives me the result, but it takes a very long time.
Is there a query, a REST API URL, or any other way to get all issues?
Please help me.
The JIRA REST API will give you all the info from each issue at a rate of a few issues per second. The Inquisitor add-on at https://marketplace.atlassian.com/plugins/com.citrix.jira.inquisitor will give you thousands of issues per second, but only the standard JIRA fields.
There is one other way. There is a table in the JIRA database named "dbo.jiraissue". If you have access to that database, you can fetch the ids of all issues. After fetching them you can send the REST request "localhost/rest/api/2/issue/issue_id" and get a JSON response. Of course you have to write some code for this, but this is one way I know to get all issues.
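A minimal sketch of that second approach; the base URL, credentials, and issue id are placeholders, and java.net.http requires Java 11+:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

// Placeholder host, credentials, and issue id
String auth = Base64.getEncoder().encodeToString("user:password".getBytes());
HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost/rest/api/2/issue/10000"))
        .header("Authorization", "Basic " + auth)
        .build();
HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());
System.out.println(response.body()); // the full issue as JSON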

OBJECT_NOT_FOUND when trying to getServingUrl of an image stored in GCS

I have written a servlet where I read an image from the Blobstore and another image from GCS, and after applying a composite to both images I store the composite image back in GCS.
My code works well up to this point.
After that, when I try to get the serving URL for the composite image, I get an OBJECT_NOT_FOUND.
Just to experiment, I manually uploaded an image to GCS and gave it all the necessary permissions: I added the service account as OWNER and gave READ access to All Users. Then I again tried just to get the serving URL. The following is my code:
BlobKey newImageKey = blobstoreService.createGsBlobKey(gcsPath);
//log.severe("GCS PATH: " + gcsPath + " BlobKey: " + newImageKey);
ServingUrlOptions options = ServingUrlOptions.Builder.withBlobKey(newImageKey);
String profilePicLink = imgService.getServingUrl(options);
I also tried the code below:
ServingUrlOptions options = ServingUrlOptions.Builder.withGoogleStorageFileName(gcsPath);
String profilePicLink = imgService.getServingUrl(options);
And in both cases, this is the error I am getting:
/controller javax.servlet.ServletException:
java.lang.IllegalArgumentException: OBJECT_NOT_FOUND:
By the way, I have not enabled billing, as I am using the default bucket with the free quota. This is still in development, so the free quota works for me.
OK, so I found out where exactly the exception is happening...
byte[] responseBytes = ApiProxy.makeSyncCall(PACKAGE, "GetUrlBase",
request.build().toByteArray());
and the exception it is throwing is:
ApiProxy.ApplicationException Application Error 8
I enabled billing and tried; it was still of no use :(
I have been trying to solve this the whole day and have searched everywhere for a solution.
Though this does not actually answer my original question, I have found a workaround. I installed Python and gsutil and set the default ACL of my bucket to read. Now when I save an image file in GCS, I just show the public URL link.
The same can also be achieved by adding .acl("public-read") to the GcsFileOptions.
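For reference, a sketch of setting that ACL at write time with the App Engine GCS client; the bucket and object names are placeholders:
GcsService gcsService = GcsServiceFactory.createGcsService();
GcsFilename filename = new GcsFilename("my-bucket", "composite.png"); // placeholder names
GcsFileOptions options = new GcsFileOptions.Builder()
        .mimeType("image/png")
        .acl("public-read") // makes the object publicly readable
        .build();
GcsOutputChannel outputChannel = gcsService.createOrReplace(filename, options);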
Once the ACL is applied by either of the two methods above, the "shared publicly" checkbox for the images in the GCS cloud console shows as a dash, and it says you do not have permission to edit permissions. I was confused by this, as I was expecting the checkbox to be checked.
But even in that scenario the publicly shared link works, which is:
http://storage.googleapis.com/[bucket_name]/[gcs_object_name]
I would still appreciate it if someone could explain why getServingUrl is not working. Yes, it is still not working after setting the default ACL to read.
Thanks,
Sukalpo.
I could not reproduce this issue by either uploading to Google Cloud Storage via the console or via the App Engine GCS Java client. In both cases I could create a public URL for the image, even without specifying any specific permissions.
Would you like to create a production issue request (https://code.google.com/p/googleappengine/issues/entry?template=Production%20issue) so we can get more details about your specific case?
What is your gcsPath? I have to use:
"/gs/" + gcsFileName.getBucketName() + "/" + gcsFileName.getObjectName();
Honestly, the only time I've run into this error (and into this unsolved question) was when I was accidentally using the wrong filename, in a difficult-to-notice way, while fetching the BlobKey using Google's APIs.
So check the obvious things first.

LinkWalk in RIAK

I have started working on a Riak project via SpringSource.
According to their specifications, linking between objects and then link walking is very simple.
I am saving two objects, linking them, and then trying to retrieve the data:
MyPojo p1 = new MyPojo("o1", "m1");
MyPojo p2 = new MyPojo("o2", "m2");
riakManager.set(bucketName1, "k1", p1);
riakManager.set(bucketName2, "k2", p2);
riakManager.link(bucketName2, "k2", bucketName1, "k1", tagName);
System.out.println(riakManager.get(bucketName1, "k1"));
System.out.println(riakManager.linkWalk(bucketName1, "k1", "_"));
The problem is that after the link, the content of the source ("k1") is deleted; only the link stays. This is the printout:
null
[MyPojo [str1=o2, str2=m2, number=200]]
Any idea why the link operation deletes the value from the source?
If I try to set the source's value again after the link, then the link gets deleted...
Thanks,
Oved.
Riak requires that the link and the data are stored in one operation; you can't update one without the other (at the moment).
So any time you set the link, the operation must also write back the data. I don't know whether the Spring adapter takes that into account. I did see some messages between the Riak and Spring developers about this, but I don't know if anything has been fixed yet.
In any case, I am also tending toward using the native Riak Java client rather than Spring.
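For anyone taking the native-client route, here is a hedged sketch with the legacy Riak Java client that writes k1's value and its link to k2 in a single store operation, so neither overwrites the other (bucket, key, and tag names mirror the question; the exact API varies by client version):
IRiakClient client = RiakFactory.pbcClient();
Bucket bucket = client.fetchBucket("bucket1").execute();
IRiakObject obj = RiakObjectBuilder.newBuilder("bucket1", "k1")
        .withValue("{\"str1\":\"o1\",\"str2\":\"m1\"}") // data and link stored together
        .addLink("bucket2", "k2", "myTag")
        .build();
bucket.store(obj).execute();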
I have decided to abandon the Spring adapter; it doesn't seem to have good enough support.
I am using the Riak Java client instead.
Does anyone think otherwise?
