Unable to execute java code in Linux server - java

I have created a Java API service. When this service is called, I try to get a response from Dialogflow. When I run the code on Windows it works fine, but when I publish it to a Linux server and test it, it throws an error.
I have set export GOOGLE_APPLICATION_CREDENTIALS="[PATH]" as a system environment variable on the Linux server, but it is not working. I then put the .json file in the project, read it directly, and made the call again. (Both approaches work fine only on my local system, not on the Linux server after publishing the project.)
List<String> texts= new ArrayList<String>();
texts.add("Hi");
String sessionId="3f46dfa4-5204-84f3-1488-5556f3d6b8a1";
String languageCode="en-US";
GoogleCredentials credentials = GoogleCredentials.fromStream(new FileInputStream("E:\\GoogleDialogFlow\\ChatBox\\Json.json"))
.createScoped(Lists.newArrayList("https://www.googleapis.com/auth/cloud-platform"));
SessionsSettings.Builder settingsBuilder = SessionsSettings.newBuilder();
SessionsSettings sessionsSettings = settingsBuilder.setCredentialsProvider(FixedCredentialsProvider.create(credentials)).build();
SessionsClient sessionsClient = SessionsClient.create(sessionsSettings);
SessionName session = SessionName.of(projectId, sessionId);
com.google.cloud.dialogflow.v2.TextInput.Builder textInput = TextInput.newBuilder().setText("Hi").setLanguageCode(languageCode);
QueryInput queryInput = QueryInput.newBuilder().setText(textInput).build();
DetectIntentResponse response = sessionsClient.detectIntent(session, queryInput);
QueryResult queryResult = response.getQueryResult();
System.out.println("====================");
System.out.format("Query Text: '%s'\n", queryResult.getQueryText());
System.out.format("Detected Intent: %s (confidence: %f)\n",
queryResult.getIntent().getDisplayName(), queryResult.getIntentDetectionConfidence());
System.out.format("Fulfillment Text: '%s'\n", queryResult.getFulfillmentText());
credentials.toBuilder().build();
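For reference, a platform-independent way to load the same key is to read it from the classpath, or to rely on Application Default Credentials so that the exported GOOGLE_APPLICATION_CREDENTIALS variable is picked up for the user that actually runs the service. The following is only a minimal sketch, and it assumes the key file is bundled on the classpath under the name Json.json:

import com.google.api.gax.core.FixedCredentialsProvider;
import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.dialogflow.v2.SessionsSettings;
import java.io.IOException;
import java.io.InputStream;
import java.util.Collections;

public class DialogflowCredentialsHelper {

    // Sketch: load the service-account key in an OS-independent way.
    // "Json.json" on the classpath is an assumption, not the poster's actual layout.
    static SessionsSettings settingsFromClasspath() throws IOException {
        InputStream keyStream = DialogflowCredentialsHelper.class
                .getClassLoader().getResourceAsStream("Json.json");
        GoogleCredentials credentials = GoogleCredentials.fromStream(keyStream)
                .createScoped(Collections.singletonList("https://www.googleapis.com/auth/cloud-platform"));
        return SessionsSettings.newBuilder()
                .setCredentialsProvider(FixedCredentialsProvider.create(credentials))
                .build();
    }

    // Alternative: rely on GOOGLE_APPLICATION_CREDENTIALS being exported for the
    // service's runtime user (e.g. in its systemd unit or Tomcat setenv.sh).
    static GoogleCredentials fromEnvironment() throws IOException {
        return GoogleCredentials.getApplicationDefault();
    }
}

Whichever variant is used, the key point is that the path must be valid on the machine where the service actually runs, which a hard-coded E:\ Windows path never is on Linux.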

Related

How to delete browser network log in java

We have a requirement to perform some activity in the browser and then capture the latest network log from Chrome. I am trying to write code that deletes the network log before performing a particular activity and then captures the log.
I am using this line of code to clear the network log, but it throws an error: driver.manage().logs().get(LogType.BROWSER).getAll().clear();
How can I solve this issue? Is there an alternative way to do this in Java with the Selenium framework?
There is no built-in support in Selenium to capture or delete the network calls made by the browser. To overcome this limitation, I have created a simple library, wow-xhr, with which we can get the network calls made by the browser.
WowXHR wowXhr = new WowXHR(new ChromeDriver()); // any Selenium WebDriver or RemoteWebDriver
WebDriver driver = wowXhr.getMockDriver();
driver.get("https://www.google.com");
driver.findElement(By.name("q")).sendKeys("selenium"); // enter value in the search box
wowXhr.log().clear(); // clears all network calls captured so far
driver.findElement(By.tagName("button")).click(); // click the search button
/* Get all network calls that are made after clicking the search button */
List<XHRLog> logs = wowXhr.log().getXHRLogs();
logs.forEach(log -> {
    Date initiatedTime = log.getInitiatedTime();
    Date completedTime = log.getCompletedTime();
    String method = log.getRequest().getMethod().name(); // GET, POST, PUT, etc.
    String url = log.getRequest().getUrl();
    Integer status = log.getResponse().getStatus();
    String requestBody = (String) log.getRequest().getBody();
    String responseBody = (String) log.getResponse().getBody();
    Map<String, String> requestHeaders = log.getRequest().getHeaders();
    Map<String, String> responseHeaders = log.getResponse().getHeaders();
});

Bigtable emulator: could not find an appropriate constructor

Recently, I have been trying to develop something using the Bigtable emulator with Java (Spring Boot) in IntelliJ IDEA.
What I have done:
The Bigtable emulator works well on my computer (macOS 10.15.6).
"cbt" works normally against the Bigtable emulator running on my Mac.
I've checked that running the Bigtable emulator doesn't require real gcloud credentials.
I wrote a unit test in IDEA like the one below, and it works fine.
I have added the environment variable in the settings.
My unit test code:
I. Connect init:
Configuration conf;
Connection connection = null;
conf = BigtableConfiguration.configure("fake-project", "fake-instance");
String host = "localhost";
String port = "8086";
II. Constant data to be written into the table.
final byte[] TABLE_NAME = Bytes.toBytes("Hello-Bigtable");
final byte[] COLUMN_FAMILY_NAME = Bytes.toBytes("cf1");
final byte[] COLUMN_NAME = Bytes.toBytes("greeting");
final String[] GREETINGS = {
"Hello World!", "Hello Cloud Bigtable!", "Hello!!"
};
III. Connecting:
if(!Strings.isNullOrEmpty(host)){
conf.set(BigtableOptionsFactory.BIGTABLE_HOST_KEY, host);
conf.set(BigtableOptionsFactory.BIGTABLE_PORT_KEY,port);
conf.set(BigtableOptionsFactory.BIGTABLE_USE_PLAINTEXT_NEGOTIATION, "true");
}
connection = BigtableConfiguration.connect(conf);
IV. Write & Read data:
Admin admin = connection.getAdmin();
Table table = connection.getTable(TableName.valueOf(TABLE_NAME));
if(!admin.tableExists(TableName.valueOf(TABLE_NAME))){
HTableDescriptor descriptor = new HTableDescriptor(TableName.valueOf(TABLE_NAME));
descriptor.addFamily(new HColumnDescriptor(COLUMN_FAMILY_NAME));
System.out.print("Create table " + descriptor.getNameAsString());
admin.createTable(descriptor);
}
for (int i = 0; i < GREETINGS.length; i++) {
String rowKey = "greeting" + i;
Put put = new Put(Bytes.toBytes(rowKey));
put.addColumn(COLUMN_FAMILY_NAME, COLUMN_NAME, Bytes.toBytes(GREETINGS[i]));
table.put(put);
}
Scan scan = new Scan();
ResultScanner scanner = table.getScanner(scan);
for (Result row : scanner) {
byte[] valueBytes = row.getValue(COLUMN_FAMILY_NAME, COLUMN_NAME);
System.out.println('\t' + Bytes.toString(valueBytes));
}
V. Output
Hello World!
Hello Cloud Bigtable!
Hello!!
The problem came after I moved this code into my project.
When I run the code in debug mode, I get an error when it tries to connect to Bigtable.
It seems that it can't create a new instance based on the config I create.
Eventually, it shows me an error like:
Could not find an appropriate constructor for com.google.cloud.bigtable.hbase1_x.BigtableConnection
P.S. I have tried launching IntelliJ IDEA from the command line; the reason I do so is that the environment variable is missing when I run the unit test.
In my .zshrc:
My terminal is iTerm2 with oh-my-zsh.
Any help is appreciated!
Thanks a lot.
It seems that you are missing the constructor for BigtableConnection: BigtableConnection(org.apache.hadoop.conf.Configuration conf)
I would suggest trying to create a Connection object by following the steps in the Google documentation:
private static Connection connection = null;
public static void connect() throws IOException {
Configuration config = BigtableConfiguration.configure(PROJECT_ID, INSTANCE_ID);
// Include the following line if you are using app profiles.
// If you do not include the following line, the connection uses the
// default app profile.
config.set(BigtableOptionsFactory.APP_PROFILE_ID_KEY, APP_PROFILE_ID);
connection = BigtableConfiguration.connect(config);
}
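If the goal is specifically to talk to the local emulator, another option worth checking is to export BIGTABLE_EMULATOR_HOST (for example via `gcloud beta emulators bigtable env-init`) and let the client configure itself. This is only a sketch; it assumes the emulator listens on localhost:8086 as in the question and that your bigtable-hbase version honors that variable:

import com.google.cloud.bigtable.hbase.BigtableConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import java.io.IOException;

public class EmulatorSmokeTest {
    public static void main(String[] args) throws IOException {
        // Assumes BIGTABLE_EMULATOR_HOST=localhost:8086 is exported before the JVM starts;
        // project and instance ids are placeholders because the emulator ignores them.
        try (Connection connection = BigtableConfiguration.connect("fake-project", "fake-instance")) {
            Admin admin = connection.getAdmin();
            System.out.println("Hello-Bigtable exists: "
                    + admin.tableExists(TableName.valueOf("Hello-Bigtable")));
        }
    }
}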

Redirecting Standard Output of a JAR in c#

I have a C# app that at some point needs to communicate with a JAR app. No problem, I use the following code to start the app, but unfortunately the window that should print the app's console remains black and the JAR file executes in GUI mode.
If I call the same file with the same command from Run or CMD, it works and the console registers messages from the JAR. Any ideas why, when started from my C# app, it won't register any message to the console?
ProcessStartInfo psi32 = default(ProcessStartInfo);
Process proc1 = new Process();
string path1 = Application.StartupPath;
dynamic process3 = path1 + "java.exe";
dynamic jar = "-jar ";
dynamic param1 = "/ssh.jar";
dynamic args3 = string.Format("{0}{1} {2}", jar, path1, param1);
psi32 = new ProcessStartInfo(process3, args3);
psi32.RedirectStandardInput = true;
psi32.UseShellExecute = false;
proc1.StartInfo = psi32;
proc1.Start();
string condeva;
using (StreamReader reader = proc1.StandardOutput)
{
    condeva = reader.ReadToEnd();
}
if (condeva.Contains("failed"))
{
    message.Text = "found it..."; // "message" is a text control on the form
}

How to create mbox using Java JavaMail?

How do I read a mail inbox using the IMAP protocol and JavaMail and then store the mails on the local disk? There is no documentation for mstor.
I tried this approach, but it seems that MStorStore just reads a local mbox instead of creating and updating it from the external server passed as parameters to the connect() function. I get the error: Folder [Inbox] does not exist.
Session lSession = Session.getDefaultInstance(props);
MStorStore lStore = new MStorStore(lSession , new URLName("mstor:c:/some_path/" + _mailModel.account.login));
lStore.connect(_mailModel.account.imap, _mailModel.account.login, _mailModel.account.password);
Folder lInbox = lStore.getDefaultFolder().getFolder("Inbox");
The question is how to create an mbox from a javax.mail.Store that I can read and update using mstor.
I don't know if I am answering the right question (or answering a question at all), but here is a method I wrote in a Scala program that takes an array of JavaMail Messages (acquired via IMAP) and writes them to a new mbox file in a directory named "mbox" in the root of my project using MStorStore. The new file is named whatever is passed in the "mboxName" parameter.
def writeToMbox(messages: Array[Message], mboxName: String) {
val mProps = System.getProperties
mProps.setProperty("mstor.mbox.metadataStrategy", "none")
val mSession = Session.getDefaultInstance(mProps)
val mStore = new MStorStore(mSession, new URLName("mstor:mbox"))
mStore.connect
val mFolder = mStore.getDefaultFolder
val localMbox = (new File("mbox", mboxName)).createNewFile
val mbox = mFolder.getFolder(mboxName)
mbox.open(Folder.READ_WRITE)
mbox.appendMessages(messages)
mbox.close(false)
mStore.close
}
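For completeness, here is roughly the same method in Java, since the question is about JavaMail. This is only a hedged translation of the Scala above; it assumes mstor's net.fortuna.mstor.MStorStore is on the classpath and that an "mbox" directory already exists in the project root:

import java.io.File;
import java.io.IOException;
import java.util.Properties;
import javax.mail.Folder;
import javax.mail.Message;
import javax.mail.MessagingException;
import javax.mail.Session;
import javax.mail.URLName;
import net.fortuna.mstor.MStorStore;

public class MboxWriter {

    // Writes the given messages (e.g. fetched via IMAP) into mbox/<mboxName>.
    public static void writeToMbox(Message[] messages, String mboxName)
            throws MessagingException, IOException {
        Properties props = System.getProperties();
        props.setProperty("mstor.mbox.metadataStrategy", "none");
        Session session = Session.getDefaultInstance(props);

        MStorStore store = new MStorStore(session, new URLName("mstor:mbox"));
        store.connect();

        // Create the backing file first, then open the folder through mstor.
        new File("mbox", mboxName).createNewFile();
        Folder mbox = store.getDefaultFolder().getFolder(mboxName);
        mbox.open(Folder.READ_WRITE);
        mbox.appendMessages(messages);
        mbox.close(false);
        store.close();
    }
}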

AWS was not able to validate the provided access credentials

I have been trying to create a Security Group using the AWS SDK, but somehow it fails to authenticate. For the specific Access Key and Secret Key I have granted administrative rights, yet it still fails to validate. On the other hand, I tried the same credentials with an AWS S3 example and it executed successfully.
I get the following error while creating the security group:
com.amazonaws.AmazonServiceException: AWS was not able to validate the provided access credentials (Service: AmazonEC2; Status Code: 401; Error Code: AuthFailure; Request ID: 1584a035-9a88-4dc7-b5e2-a8b7bde6f43c)
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1077)
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:725)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:460)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:295)
at com.amazonaws.services.ec2.AmazonEC2Client.invoke(AmazonEC2Client.java:9393)
at com.amazonaws.services.ec2.AmazonEC2Client.createSecurityGroup(AmazonEC2Client.java:1146)
at com.sunil.demo.ec2.SetupEC2.createSecurityGroup(SetupEC2.java:84)
at com.sunil.demo.ec2.SetupEC2.main(SetupEC2.java:25)
Here is the Java Code:
public class SetupEC2 {
AWSCredentials credentials = null;
AmazonEC2Client amazonEC2Client ;
public static void main(String[] args) {
SetupEC2 setupEC2Instance = new SetupEC2();
setupEC2Instance.init();
setupEC2Instance.createSecurityGroup();
}
public void init(){
// Intialize AWS Credentials
try {
credentials = new BasicAWSCredentials("XXXXXXXX", "XXXXXXXXX");
} catch (Exception e) {
throw new AmazonClientException(
"Cannot load the credentials from the credential profiles file. " +
"Please make sure that your credentials file is at the correct " +
"location (/home/sunil/.aws/credentials), and is in valid format.",
e);
}
// Initialize EC2 instance
try {
amazonEC2Client = new AmazonEC2Client(credentials);
amazonEC2Client.setEndpoint("ec2.ap-southeast-1.amazonaws.com");
amazonEC2Client.setRegion(Region.getRegion(Regions.AP_SOUTHEAST_1));
} catch (Exception e) {
e.printStackTrace();
}
}
public boolean createSecurityGroup(){
boolean securityGroupCreated = false;
String groupName = "sgec2securitygroup";
String sshIpRange = "0.0.0.0/0";
String sshprotocol = "tcp";
int sshFromPort = 22;
int sshToPort =22;
String httpIpRange = "0.0.0.0/0";
String httpProtocol = "tcp";
int httpFromPort = 80;
int httpToPort = 80;
String httpsIpRange = "0.0.0.0/0";
String httpsProtocol = "tcp";
int httpsFromPort = 443;
int httpsToProtocol = 443;
try {
CreateSecurityGroupRequest createSecurityGroupRequest = new CreateSecurityGroupRequest();
createSecurityGroupRequest.withGroupName(groupName).withDescription("Created from AWS SDK Security Group");
createSecurityGroupRequest.setRequestCredentials(credentials);
CreateSecurityGroupResult csgr = amazonEC2Client.createSecurityGroup(createSecurityGroupRequest);
String groupid = csgr.getGroupId();
System.out.println("Security Group Id : " + groupid);
System.out.println("Create Security Group Permission");
Collection<IpPermission> ips = new ArrayList<IpPermission>();
// Permission for SSH only to your ip
IpPermission ipssh = new IpPermission();
ipssh.withIpRanges(sshIpRange).withIpProtocol(sshprotocol).withFromPort(sshFromPort).withToPort(sshToPort);
ips.add(ipssh);
// Permission for HTTP, any one can access
IpPermission iphttp = new IpPermission();
iphttp.withIpRanges(httpIpRange).withIpProtocol(httpProtocol).withFromPort(httpFromPort).withToPort(httpToPort);
ips.add(iphttp);
//Permission for HTTPS, any one can accesss
IpPermission iphttps = new IpPermission();
iphttps.withIpRanges(httpsIpRange).withIpProtocol(httpsProtocol).withFromPort(httpsFromPort).withToPort(httpsToProtocol);
ips.add(iphttps);
System.out.println("Attach Owner to security group");
// Register this security group with owner
AuthorizeSecurityGroupIngressRequest authorizeSecurityGroupIngressRequest = new AuthorizeSecurityGroupIngressRequest();
authorizeSecurityGroupIngressRequest.withGroupName(groupName).withIpPermissions(ips);
amazonEC2Client.authorizeSecurityGroupIngress(authorizeSecurityGroupIngressRequest);
securityGroupCreated = true;
} catch (Exception e) {
// TODO: handle exception
e.printStackTrace();
securityGroupCreated = false;
}
System.out.println("securityGroupCreated: " + securityGroupCreated);
return securityGroupCreated;
}
}
Try updating your system time.
When the difference between the AWS date/time and your date/time is too big, the credentials will not be accepted.
For Debian/Ubuntu users:
If you have never set your time zone, you can do so with
sudo dpkg-reconfigure tzdata
Stop the NTP service, because large time differences cannot be corrected while the service is running.
sudo /etc/init.d/ntp stop
Synchronize your time and date (-q: set the time and quit, i.e. run only once; -g: allow the first adjustment to be big; -x: slew up to 600 seconds, i.e. also adjust large differences; -n: do not fork, the process will not go into the background).
sudo ntpd -q -g -x -n
Restart the service:
sudo /etc/init.d/ntp start
Check the current system date/time:
sudo date
Write the system date/time to the hardware clock:
sudo hwclock --systohc
Show the hardware clock date/time:
sudo hwclock
You must specify the profile and the region:
aws ec2 describe-instances --profile nameofyourprofile --region eu-west-1
"A client error (AuthFailure) occurred when calling the [Fill-in the blanks] operation: AWS was not able to validate the provided access credentials"
If you are confident of the validity of your AWS credentials, i.e. the access key, secret key, and corresponding profile name, then your date and time being off-track is a very likely culprit.
In my case, I was confident but I was wrong: I had used the wrong keys. It doesn't hurt to double-check.
Let's say that you created an IAM user called "guignol". Configure "guignol" in ~/.aws/config as follows:
[profile guignol]
region = us-east-1
aws_access_key_id = AKXXXYYY...
aws_secret_access_key = ...
Install the AWS CLI (command line interface) if you haven't already done so. As a test, run aws ec2 describe-instances --profile guignol. If you get an error message that AWS was not able to validate the credentials, run aws configure --profile guignol, enter your credentials, and run the test command again.
If you put your credentials in ~/.aws/credentials then you don't need to provide a parameter to your AmazonEC2Client call. If you do this then on an EC2 instance the same code will work with Assumed STS roles.
For more info see: http://docs.aws.amazon.com/AWSSdkDocsJava/latest/DeveloperGuide/credentials.html
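As an illustration of that approach with the v1 SDK used in the question, a minimal sketch might look like the following (the region is taken from the question; credential lookup is delegated to the default provider chain):

import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.ec2.AmazonEC2Client;

public class Ec2ClientFactory {

    // Resolves credentials from ~/.aws/credentials, environment variables,
    // or (on an EC2 instance) the instance profile, instead of hard-coded keys.
    public static AmazonEC2Client create() {
        AmazonEC2Client client = new AmazonEC2Client(new DefaultAWSCredentialsProviderChain());
        client.setRegion(Region.getRegion(Regions.AP_SOUTHEAST_1));
        return client;
    }
}

With this, the hard-coded BasicAWSCredentials in init() can be dropped entirely.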
In my case, killing the terminal and running the command again helped.
In my case I copied the CDK environment variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN for programmatic access, but it turned out that I already had an old session token in my ~/.aws/credentials which I had forgotten about. I needed to remove the old tokens from the file.
