I am trying to use the Google Cloud Endpoints Java client from Android, as follows:
Core.Builder coreBuilder = new Core.Builder(
        AndroidHttp.newCompatibleTransport(), new GsonFactory(), null);
coreBuilder.setApplicationName("myapp");
if (MainActivity.ENDPOINTS_URL != null) {
    coreBuilder.setRootUrl(MainActivity.ENDPOINTS_URL);
    coreBuilder.setGoogleClientRequestInitializer(new GoogleClientRequestInitializer() {
        public void initialize(AbstractGoogleClientRequest<?> request) throws IOException {
            request.setDisableGZipContent(true);
        }
    });
}
Core core = coreBuilder.build();
myList = core.asdf("x=&+x", myObject);
server:
@ApiMethod(name = "asdf")
public List<String> asdf(@Named("param1") String param1, MyObject myObject) {
    if (param1.equals("x=&+x")) {
        // should go here, but never does
    }
...
While it mostly works, somehow the param1 string does not get transmitted correctly: "x=&+x" arrives at the server as "x=&%2Bx". Is this a known bug? Or do arguments have to be manually encoded somehow? Or is this somehow particular to my environment?
App Engine SDK 1.8.8 for Java, Google API client 1.17.0-rc, using the dev environment.
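For what it's worth, the value can be recovered by decoding it once more on the server, so a manual workaround looks possible; below is a minimal sketch of that idea (the class and variable names are just for illustration, not part of my app):
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;

public class ParamDecodeSketch {
    public static void main(String[] args) throws UnsupportedEncodingException {
        // Value as it currently arrives at the endpoint method
        String received = "x=&%2Bx";
        // Decoding the percent-escaped sequence restores the original string
        String decoded = URLDecoder.decode(received, "UTF-8");
        System.out.println(decoded); // prints: x=&+x
    }
}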
Cheers,
Andres
I am building a wrapper library using Apache Flink in which I consume messages from multiple Kafka topics, and I have a set of applications that want to process the messages from those topics.
Example:
I have 10 applications app1, app2, app3 ... app10 (each of them is a Java library that is part of the same on-prem project, i.e., all 10 jars are part of the same .war file),
out of which only 5 are supposed to consume the messages coming to the consumer group. I am able to do the filtering for those 5 apps with the help of a filter function.
The challenge is in the strStream.process(executionServiceInterface) call, where app1 provides an implementation class of ExceucionServiceInterface as ExecutionServiceApp1Impl and similarly app2 provides ExecutionServiceApp2Impl.
When there are multiple implementations available, Spring wants us to provide a @Qualifier annotation, or @Primary has to be marked on one of the implementations (ExecutionServiceApp1Impl, ExecutionServiceApp2Impl).
But I don't really want to do this, as I am building a generic wrapper library that should support any number of such applications (app1, app2, etc.), and all of them should be able to provide their own implementation logic (ExecutionServiceApp1Impl, ExecutionServiceApp2Impl).
Can someone help me here? How do I solve this?
Below is the code for reference.
@Autowired
private ExceucionServiceInterface executionServiceInterface;

public void init() throws Exception {
    StreamExecutionEnvironment environment = StreamExecutionEnvironment.getExecutionEnvironment();
    FlinkKafkaConsumer011<String> consumer = createStringConsumer(topicList, kafkaAddress, kafkaGroup);
    if (consumer != null) {
        DataStream<String> strStream = environment.addSource(consumer);
        strStream.filter(filterFunctionInterface).process(executionServiceInterface);
    }
}
public FlinkKafkaConsumer011<String> createStringConsumer(List<String> listOfTopics, String kafkaAddress, String kafkaGroup) throws Exception {
    FlinkKafkaConsumer011<String> myConsumer = null;
    try {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", kafkaAddress);
        props.setProperty("group.id", kafkaGroup);
        myConsumer = new FlinkKafkaConsumer011<>(listOfTopics, new SimpleStringSchema(), props);
    } catch (Exception e) {
        throw e;
    }
    return myConsumer;
}
Many thanks in advance!!
I solved this problem by using reflection; below is the code that solved the issue.
Note: this requires me to know the list of fully qualified class names and method names, along with their parameters.
@Component
public class SampleJobExecutor extends ProcessFunction<String, String> {

    @Autowired
    MyAppProperties myAppProperties;

    @Override
    public void processElement(String inputMessage, ProcessFunction<String, String>.Context context,
            Collector<String> collector) throws Exception {
        String className = null;
        String methodName = null;
        try {
            // Look up the implementing class and method for the incoming message's appName
            Map<String, List<String>> map = myAppProperties.getMapOfImplementors();
            JSONObject json = new JSONObject(inputMessage);
            if (json != null && json.has("appName")) {
                className = map.get(json.getString("appName")).get(0);
                methodName = map.get(json.getString("appName")).get(1);
            }
            // Instantiate the implementation reflectively and invoke the configured method
            Class<?> forName = Class.forName(className);
            Object job = forName.newInstance();
            Method method = forName.getDeclaredMethod(methodName, String.class);
            method.invoke(job, inputMessage);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
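For reference, this is a hypothetical shape of the mapping that myAppProperties.getMapOfImplementors() is assumed to return; the class and method names below are made up:
// Hypothetical appName -> [fullyQualifiedClassName, methodName] entries;
// the real values come from the application properties.
Map<String, List<String>> mapOfImplementors = new HashMap<>();
mapOfImplementors.put("app1", Arrays.asList("com.example.app1.ExecutionServiceApp1Impl", "executeApp1Logic"));
mapOfImplementors.put("app2", Arrays.asList("com.example.app2.ExecutionServiceApp2Impl", "executeApp2Logic"));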
I want to use Playwright's connect() method with a proxy to consume Browserless. According to the Browserless docs:
https://docs.browserless.io/docs/playwright.html
The standard connect method uses playwright's built-in browser-server
to handle the connection. This, generally, is a faster and more
fully-featured method since it supports most of the playwright
parameters (such as using a proxy and more). However, since this
requires the usage of playwright in our implementation, things like
ad-blocking and stealth aren't supported. In order to utilize those,
you'll need to see our integration with connectOverCDP.
I thought connect would have a .setProxy(), like launch():
browserType.launch(new BrowserType.LaunchOptions().setProxy(proxy));
But connect has two variations:
default Browser connect(String wsEndpoint) {
    return connect(wsEndpoint, null);
}
Browser connect(String wsEndpoint, ConnectOptions options);
I thought I would pick connect + ConnectOptions, which surely has a .setProxy as well, but it doesn't:
class ConnectOptions {
    public Map<String, String> headers;
    public Double slowMo;
    public Double timeout;

    public ConnectOptions setHeaders(Map<String, String> headers) {
        this.headers = headers;
        return this;
    }

    public ConnectOptions setSlowMo(double slowMo) {
        this.slowMo = slowMo;
        return this;
    }

    public ConnectOptions setTimeout(double timeout) {
        this.timeout = timeout;
        return this;
    }
}
I have tried this:
final Browser.NewContextOptions browserContextOptions = new Browser.NewContextOptions().setProxy(proxy);
Browser browser = playwright.chromium()
.connect("wss://&--proxy-server=http://myproxyserver:1111")
.newContext(browserContextOptions)
.browser();
browser.newPage("resource");
But the proxy returns that authentication is required.
I'm confused now: Browserless says that .connect can use a proxy, but how? Is Browserless wrong? Or am I missing something? I'm new to this technology.
I have also tried using page.setExtraHTTPHeaders:
private void applyProxyToPage(final Page page, final String userPassCombination) {
    final String value = "Basic " + Base64.getEncoder().encodeToString(userPassCombination.getBytes(Charset.forName("UTF-8")));
    page.setExtraHTTPHeaders(Collections.singletonMap("Authorization", value));
    //page.setExtraHTTPHeaders(Collections.singletonMap("Proxy-Authorization", value)); // Not working either
}
With the help of my friend Alejandro Loyola at Browserless, I am now able to connect. I will post the snippet:
private String navigateWithPlaywrightInBrowserlessWithProxy(final String token, final String proxyHost,
        final String userName, final String userPass, final String url) {
    final Browser.NewContextOptions browserContextOptions = new Browser.NewContextOptions()
            .setProxy(new Proxy(proxyHost)
                    .setUsername(userName)
                    .setPassword(userPass)); // Raw password, not encoded in any way
    try (final Playwright playwright = Playwright.create();
         final Browser browser = playwright.chromium().connectOverCDP("wss://chrome.browserless.io?token=" + token);
         final BrowserContext context = browser.newContext(browserContextOptions)) {
        Page page = context.newPage();
        page.route("**/*.svg", Route::abort);
        page.route("**/*.png", Route::abort);
        page.route("**/*.jpg", Route::abort);
        page.route("**/*.jpeg", Route::abort);
        page.route("**/*.css", Route::abort);
        page.route("**/*.scss", Route::abort);
        page.navigate(url, new Page.NavigateOptions()
                .setWaitUntil(WaitUntilState.DOMCONTENTLOADED));
        return page.innerHTML("body");
    }
}
My gotchas were as follows.
I was using:
"wss://chrome.browserless.io/playwright?token="
instead of:
"wss://chrome.browserless.io?token="
and I had to use connectOverCDP instead of connect.
I am using the AWS Java SDK in my application to talk to one of my S3 buckets, which holds objects in JSON format.
A document may look like this:
{
"a" : dataA,
"b" : dataB,
"c" : dataC,
"d" : dataD,
"e" : dataE
}
Now, for a certain document, let's say document1, I need to fetch the values corresponding to fields a and b instead of fetching the entire document.
This sounds like something that wouldn't be possible, because S3 buckets can hold any type of object, not just JSON documents.
Is this something that is achievable though?
That's actually doable with S3 Select. You can do selects like you've described, but only for particular formats: JSON, CSV, and Parquet.
Imagine having a data.json file in the so67315601 bucket in eu-central-1:
{
"a": "dataA",
"b": "dataB",
"c": "dataC",
"d": "dataD",
"e": "dataE"
}
First, learn how to select the fields via the S3 Console: use "Object Actions" → "Query with S3 Select".
AWS Java SDK 1.x
Here is the code to do the select with AWS Java SDK 1.x:
@ExtendWith(S3.class)
class SelectTest {

    @AWSClient(endpoint = Endpoint.class)
    private AmazonS3 client;

    @Test
    void test() throws IOException {
        // LINES: Each line in the input data contains a single JSON object
        // DOCUMENT: A single JSON object can span multiple lines in the input
        final JSONInput input = new JSONInput();
        input.setType(JSONType.DOCUMENT);

        // Configure input format and compression
        final InputSerialization inputSerialization = new InputSerialization();
        inputSerialization.setJson(input);
        inputSerialization.setCompressionType(CompressionType.NONE);

        // Configure output format
        final OutputSerialization outputSerialization = new OutputSerialization();
        outputSerialization.setJson(new JSONOutput());

        // Build the request
        final SelectObjectContentRequest request = new SelectObjectContentRequest();
        request.setBucketName("so67315601");
        request.setKey("data.json");
        request.setExpression("SELECT s.a, s.b FROM s3object s LIMIT 5");
        request.setExpressionType(ExpressionType.SQL);
        request.setInputSerialization(inputSerialization);
        request.setOutputSerialization(outputSerialization);

        // Run the query
        final SelectObjectContentResult result = client.selectObjectContent(request);

        // Parse the results
        final InputStream stream = result.getPayload().getRecordsInputStream();
        IOUtils.copy(stream, System.out);
    }
}
The output is:
{"a":"dataA","b":"dataB"}
AWS Java SDK 2.x
The code for the AWS Java SDK 2.x is more involved. Refer to this ticket for more information.
@ExtendWith(S3.class)
class SelectTest {

    @AWSClient(endpoint = Endpoint.class)
    private S3AsyncClient client;

    @Test
    void test() throws Exception {
        final InputSerialization inputSerialization = InputSerialization
                .builder()
                .json(JSONInput.builder().type(JSONType.DOCUMENT).build())
                .compressionType(CompressionType.NONE)
                .build();
        final OutputSerialization outputSerialization = OutputSerialization.builder()
                .json(JSONOutput.builder().build())
                .build();
        final SelectObjectContentRequest select = SelectObjectContentRequest.builder()
                .bucket("so67315601")
                .key("data.json")
                .expression("SELECT s.a, s.b FROM s3object s LIMIT 5")
                .expressionType(ExpressionType.SQL)
                .inputSerialization(inputSerialization)
                .outputSerialization(outputSerialization)
                .build();

        final TestHandler handler = new TestHandler();
        client.selectObjectContent(select, handler).get();

        RecordsEvent response = (RecordsEvent) handler.receivedEvents.stream()
                .filter(e -> e.sdkEventType() == SelectObjectContentEventStream.EventType.RECORDS)
                .findFirst()
                .orElse(null);
        System.out.println(response.payload().asUtf8String());
    }

    private static class TestHandler implements SelectObjectContentResponseHandler {
        private SelectObjectContentResponse response;
        private List<SelectObjectContentEventStream> receivedEvents = new ArrayList<>();
        private Throwable exception;

        @Override
        public void responseReceived(SelectObjectContentResponse response) {
            this.response = response;
        }

        @Override
        public void onEventStream(SdkPublisher<SelectObjectContentEventStream> publisher) {
            publisher.subscribe(receivedEvents::add);
        }

        @Override
        public void exceptionOccurred(Throwable throwable) {
            exception = throwable;
        }

        @Override
        public void complete() {
        }
    }
}
As you see, it's possible to make S3 selects programmatically!
You might be wondering what those @AWSClient and @ExtendWith(S3.class) annotations are.
They come from aws-junit5, a small library for injecting AWS clients into your tests; it greatly simplifies test setup. I am the author. The usage is really simple; try it in your next project!
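If you are not using that library, a plain client works just as well for the SDK 1.x example above; a minimal sketch, assuming the default credentials chain and the eu-central-1 region:
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

// Builds a regular S3 client instead of injecting one via the test extension.
AmazonS3 client = AmazonS3ClientBuilder.standard()
        .withRegion(Regions.EU_CENTRAL_1)
        .build();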
With the following I have added console.log functionality to the javax.script.ScriptEngine:
public class Console {
    public void log(String text) {
        System.out.println("console: " + text);
    }
}

private static ScriptEngine getJavaScriptEngine() {
    ScriptEngine engine = new ScriptEngineManager().getEngineByName("JavaScript");
    Console console = new Console();
    engine.put("console", console);
    return engine;
}
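A quick usage check of the setup above; this assumes the main method lives in the same class as getJavaScriptEngine():
public static void main(String[] args) throws ScriptException {
    ScriptEngine engine = getJavaScriptEngine();
    // Calls the log(String) method of the bound Java Console object
    engine.eval("console.log('hello from script')"); // prints: console: hello from script
}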
This is needed because console, alert, etc. are not part of the engine's implementation. After a lot of searching I found, here and elsewhere, only the same kind of statement, but I am wondering whether there isn't a library which does this properly?
I had a similar solution, but found that it would not format objects or arrays, nor would it handle multiple arguments (e.g. console.log('this is the answer:', 42)).
To work around this, I had to polyfill console. It doesn't support all of its functions, but it will do for the purpose of logging. To format objects and arrays as JSON, it turned out that the JSON object wasn't available either, so I had to polyfill that object too.
There is probably a prettier solution to this, but this setup got me going:
To polyfill the JSON object, take the script mentioned here: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON#Polyfill. Rhino doesn't know about window, so replace:
if (!window.JSON) {
window.JSON = {
// ... rest of code ...
};
}
by:
// if (!window.JSON) {
// window.
JSON = {
// ... rest of code ...
};
// }
Save it under src/main/resources/scripts/json.js.
Then, for the console object, put the following content in src/main/resources/scripts/console.js:
console = {
_format: function(values) {
var msg = [];
for (var i=0;i<values.length;i++)
msg.push(JSON.stringify(values[i]));
return msg.join(', ');
},
log: function() {
log.fine(console._format(arguments));
},
info: function() {
log.info(console._format(arguments));
},
warn: function() {
log.warning(console._format(arguments));
}
};
Note that this console implementation uses a log global variable. In my case, this is a JUL Logger instance, but with a little bit of creativity it can be changed to use another logging framework (e.g. SLF4j).
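For example, here is a minimal sketch of such an adaptation for SLF4j; the class name and the mapping of levels are my own choices, not part of the original setup. The script keeps calling log.fine / log.info / log.warning, and the adapter forwards them to an SLF4j Logger:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical adapter exposing the JUL-style method names used by console.js
// (fine, info, warning) and delegating them to SLF4j.
public class Slf4jScriptLog {
    private final Logger delegate;

    public Slf4jScriptLog(Logger delegate) {
        this.delegate = delegate;
    }

    public void fine(String message)    { delegate.debug(message); }
    public void info(String message)    { delegate.info(message); }
    public void warning(String message) { delegate.warn(message); }
}
It would then be bound with engine.put("log", new Slf4jScriptLog(LoggerFactory.getLogger("script"))) instead of the JUL logger.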
Finally, to put this all together in Java code, it can be done like this:
public class ConsoleTest {
    public static void main(String... args) throws ScriptException, IOException {
        ScriptEngineManager factory = new ScriptEngineManager();
        ScriptEngine engine = factory.getEngineByName("JavaScript");
        engine.put("log", Logger.getLogger("script"));
        run(engine, "/scripts/json.js");
        run(engine, "/scripts/console.js");
        engine.eval("console.log('string value');");
        engine.eval("console.warn(['array','value']);");
        engine.eval("console.info({a:1,b:'two'});");
    }

    // Made static so it can be called from main(); loads a script from the classpath
    private static void run(ScriptEngine engine, String resourceName) throws ScriptException, IOException {
        InputStream in = ConsoleTest.class.getResourceAsStream(resourceName);
        Reader reader = new InputStreamReader(in, Charset.forName("UTF-8"));
        engine.eval(reader);
        reader.close();
    }
}
Where can I find the Jira issue type values that we pass to the IssueInputBuilder class constructor?
For example, if I want to create an issue of type Bug using the Jira REST API, we pass the value 1L to the IssueInputBuilder constructor:
IssueInputBuilder issueBuilder = new IssueInputBuilder("Key", 1L);
Similarly, what are the values of the other Jira issue types? Does anybody know the values we need to pass?
If you are using a later Jira REST Java Client API (e.g. 4.0), the interface has changed. You must use the following code to browse all issue types:
private static final String JIRA_SERVER = "http://jiralab";

public static void main(String[] args) {
    try {
        JiraRestClientFactory factory = new AsynchronousJiraRestClientFactory();
        URI uri = new URI(JIRA_SERVER);
        JiraRestClient client = factory.createWithBasicHttpAuthentication(uri, "admin", "admin");
        listAllIssueTypes(client);
    } catch (Exception ex) {
    }
}

private static void listAllIssueTypes(JiraRestClient client) throws Exception {
    Promise<Iterable<IssueType>> promise = client.getMetadataClient().getIssueTypes();
    Iterable<IssueType> issueTypes = promise.claim();
    for (IssueType it : issueTypes) {
        System.out.println("Type ID = " + it.getId() + ", Name = " + it.getName());
    }
}
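Once you know the ID you need (for example, the ID listed for Bug), you pass it to IssueInputBuilder as in the question; a minimal sketch, where the project key "PROJ" and the ID 1L are placeholders:
// "PROJ" and 1L are placeholder values taken from the listing
// produced by listAllIssueTypes() above.
IssueInputBuilder issueBuilder = new IssueInputBuilder("PROJ", 1L);
issueBuilder.setSummary("Created via the REST Java client");
BasicIssue created = client.getIssueClient().createIssue(issueBuilder.build()).claim();
System.out.println("Created issue " + created.getKey());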
If you want to get a list of all available issue types, you can use the REST API (/rest/api/2/issuetype). To try that on your JIRA instance, I recommend the Atlassian REST API Browser.
Or just look here: Finding the Id for Issue Types
In Java you can get a list of all issue type objects using getAllIssueTypeObjects().
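For completeness, a sketch of that server-side (JIRA plugin) approach, assuming ComponentAccessor is available on the classpath:
import com.atlassian.jira.component.ComponentAccessor;
import com.atlassian.jira.issue.issuetype.IssueType;

// Lists every issue type known to this JIRA instance from plugin code.
for (IssueType type : ComponentAccessor.getConstantsManager().getAllIssueTypeObjects()) {
    System.out.println(type.getId() + " = " + type.getName());
}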