I have an Environment enum and an Application enum. Each application also has its own class which has a test for that app. I want to run each test in all the environments before it goes on to the next test. Here is part of what I have in the main method
for (Environment env : Environment.values())
{
    new AccountInventory(env);
    AccountInventory.accountInventoryTests(null, env);

    new AuditActionItems(env);
    AuditActionItems.auditActionItemTests(null, env);

    new SalesPipeline(env);
    SalesPipeline.salesPipelineTests(null, env);
    // ...
}
Here is an example of what I have in a class
public static boolean accountInventoryTests(Application app, Environment env)
{
    WebDriver driver = new InternetExplorerDriver();
    try {
        driver.get(env.getDomain() + Application.ACCOUNTINVENTORY.getContextRoot());
        driver.findElement(By.name("j_username")).sendKeys(USER);
        driver.findElement(By.name("j_password")).sendKeys(PASSWORD);
        driver.findElement(By.cssSelector("input[type='submit']")).click();
Right now it runs all the tests in one environment, then runs all of them in the next environment. Thanks in advance.
It looks like you don't need to iterate over your Application enum for your tests, because you already access the applications directly in your test methods (see Application.ACCOUNTINVENTORY.getContextRoot()).
If you really want to iterate over the enum, you could try something like this:
for (Environment env : Environment.values())
{
    new AccountInventory(env);
    for (Application app : Application.values())
    {
        AccountInventory.accountInventoryTests(app, env);
    }

    new AuditActionItems(env);
    for (Application app : Application.values())
    {
        AuditActionItems.auditActionItemTests(app, env);
    }
    ...
}
Hope it helps
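If the goal stated in the question is taken literally, i.e. run one test in every environment before moving on to the next test, the loops can also be inverted. A sketch, using the class and method names from the question:

// Run each application's test across all environments before starting the next test.
for (Environment env : Environment.values())
{
    new AccountInventory(env);
    AccountInventory.accountInventoryTests(null, env);
}
for (Environment env : Environment.values())
{
    new AuditActionItems(env);
    AuditActionItems.auditActionItemTests(null, env);
}
for (Environment env : Environment.values())
{
    new SalesPipeline(env);
    SalesPipeline.salesPipelineTests(null, env);
}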
Related
I am using Vert.x and trying to test some parameters whose data I read from a JSON file. Currently it works, but I want to load this file from the classpath so it can be tested from a different computer.
private ConfigRetriever getConfigRetriever() {
    ConfigStoreOptions fileStore = new ConfigStoreOptions().setType("file").setOptional(true)
            .setConfig(new JsonObject()
                    .put("path", "/home/user/MyProjects/MicroserviceBoilerPlate/src/test/resources/local_file.json"));
    ConfigStoreOptions sysPropsStore = new ConfigStoreOptions().setType("sys");
    ConfigRetrieverOptions options = new ConfigRetrieverOptions().addStore(fileStore).addStore(sysPropsStore);
    return ConfigRetriever.create(Vertx.vertx(), options);
}
My path as written above starts from the /home/ directory, which makes it impossible to test on another machine. My test below uses this config:
@Test
public void tourTypes() {
    ConfigRetriever retriever = getConfigRetriever();
    retriever.getConfig(ar -> {
        if (ar.failed()) {
            // Failed to retrieve the configuration
        } else {
            JsonObject config = ar.result();
            List<String> extractedIds = YubiParserServiceCustomImplTest.getQueryParameters(config, "tourTypes");
            assertEquals(asList("1", "2", "3", "6"), extractedIds);
        }
    });
}
I want to make the path a classpath-relative path so I can test it in any environment.
I tried to access the classpath like this, but I'm not sure how it should be done:
private void fileFinder() {
    Path p1 = Paths.get("/test/resources/local_file.json");
    Path fileName = p1.getFileName();
}
If you have stored the file inside "src/test/resources" then you can use
InputStream confFile = getClass().getResourceAsStream("/local_file.json");
or
URL url = getClass().getResource("/local_file.json");
inside your test class (example)
IMPORTANT!
In both cases the file names can start with a / or not. If it does, it starts at the root of the classpath. If not, it starts at the package of the class on which the method is called.
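For example, a minimal sketch of the difference (the package and class name are made up purely for illustration):

package com.example;

import java.io.InputStream;

// Hypothetical test helper class, with local_file.json stored under
// src/test/resources, i.e. at the root of the test classpath.
public class ResourceLookupExample {

    public void lookups() {
        // Leading slash: resolved from the root of the classpath.
        InputStream fromRoot = getClass().getResourceAsStream("/local_file.json");

        // No leading slash: resolved relative to this class's package,
        // so this looks for com/example/local_file.json on the classpath instead.
        InputStream fromPackage = getClass().getResourceAsStream("local_file.json");
    }
}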
Put the .json file into the /resources folder of your project (here an example).
Then access it via the class loader's getResourceAsStream:
InputStream configFile = getClass().getClassLoader().getResourceAsStream("path/to/file.json");
JsonObject config = new JsonObject(new String(configFile.readAllBytes(), StandardCharsets.UTF_8)); // readAllBytes() requires Java 9+
// Then provide this config to Vertx
As I understand, considering the location of your json file, you simply need to do this:
.setConfig(new JsonObject().put("path", "local_file.json"));
See this for reference.
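Applied to the getConfigRetriever() method from the question, that change would look roughly like this (a sketch; it assumes local_file.json ends up at the root of the test classpath, which is the case for files under src/test/resources):

private ConfigRetriever getConfigRetriever() {
    // Relative path instead of /home/...; Vert.x can typically resolve it
    // from the classpath as well as the working directory, so the test
    // no longer depends on one machine's directory layout.
    ConfigStoreOptions fileStore = new ConfigStoreOptions()
            .setType("file")
            .setOptional(true)
            .setConfig(new JsonObject().put("path", "local_file.json"));
    ConfigStoreOptions sysPropsStore = new ConfigStoreOptions().setType("sys");
    ConfigRetrieverOptions options = new ConfigRetrieverOptions()
            .addStore(fileStore)
            .addStore(sysPropsStore);
    return ConfigRetriever.create(Vertx.vertx(), options);
}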
I'm writing a UDF for Pig in Java. It works fine, but Pig doesn't give me a way to separate environments. What my Pig script does is get the geo location from an IP address.
Here's my code on the Geo location part.
private static final String GEO_DB = "GeoLite2-City.mmdb";
private static final String GEO_FILE = "/geo/" + GEO_DB;

public Map<String, Object> geoData(String ipStr) {
    Map<String, Object> geoMap = new HashMap<String, Object>();
    DatabaseReader reader = new DatabaseReader.Builder(new File(GEO_DB)).build();
    // other stuff
}
GeoLite2-City.mmdb exists in HDFS, which is why I can refer to it via the absolute path /geo/GeoLite2-City.mmdb.
However, I can't do that from my JUnit test, or I would have to create /geo/GeoLite2-City.mmdb on my local machine and on Jenkins, which is not ideal. I'm trying to figure out a way to make my test pass while still using new File(GEO_DB) and not
getClass().getResourceAsStream("./geo/GeoLite2-City.mmdb")
because that doesn't work in Hadoop.
And if I run the JUnit test, it fails because I don't have /geo/GeoLite2-City.mmdb on my local machine.
Is there any way I can overcome this? I just want my tests to pass without changing the code to use getClass().getResourceAsStream, and I can't put an if/else around that because Pig doesn't give me a way to pass in a parameter, or maybe I'm missing something.
And this is my JUnit test
@Test
@Ignore
public void shouldGetGeoData() throws Exception {
    String ipTest = "128.101.101.101";
    Map<String, Object> geoJson = new LogLine2Json().geoData(ipTest);
    assertThat(geoJson.get("lLa").toString(), is(equalTo("44.9759")));
    assertThat(geoJson.get("lLo").toString(), is(equalTo("-93.2166")));
}
It works if I read the database file from the resources folder; that's why I have the @Ignore.
Besides, your whole code looks pretty un-testable.
Every time you directly call new in your production code, you prevent dependency injection, and thereby you make it much harder to test your code.
The point is to not call new File() within your production code.
Instead, you could use a factory that gives you a "ready to use" DatabaseReader object. Then you can test that your factory does the right thing, and you can mock that factory when testing this code (to return a mocked database reader).
So that one File instance is just the tip of your "testing problems" here.
Honestly: don't write production code first. Do TDD: write the test cases first, and you will quickly learn that production code like the one you are presenting here is really hard to test. When you apply TDD, you start from the test perspective, and you end up with production code that is actually testable.
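A minimal sketch of what such a factory could look like (the interface and class names here are made up for illustration; DatabaseReader and its Builder come from the MaxMind GeoIP2 library already used in the question):

import java.io.File;
import java.io.IOException;

import com.maxmind.geoip2.DatabaseReader;

// Hypothetical factory abstraction: production code depends on this
// interface instead of calling `new DatabaseReader.Builder(...)` directly.
interface GeoDatabaseReaderFactory {
    DatabaseReader create() throws IOException;
}

// Production implementation that builds the reader from a configurable .mmdb file.
class FileGeoDatabaseReaderFactory implements GeoDatabaseReaderFactory {

    private final File dbFile;

    FileGeoDatabaseReaderFactory(File dbFile) {
        this.dbFile = dbFile;
    }

    @Override
    public DatabaseReader create() throws IOException {
        return new DatabaseReader.Builder(dbFile).build();
    }
}

The UDF class would then receive a GeoDatabaseReaderFactory (or the DatabaseReader itself) as a constructor argument instead of calling new, and a unit test could pass in a mock factory or one that points at a file copied from the test resources.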
You have to make the file location configurable, e.g. inject it via the constructor. For example, you could create a non-default constructor for testing only.
public class LogLine2Json {

    private static final String DEFAULT_GEO_DB = "GeoLite2-City.mmdb";
    private static final String DEFAULT_GEO_FILE = "/geo/" + DEFAULT_GEO_DB;

    private final String geoFile;

    public LogLine2Json() {
        this(DEFAULT_GEO_FILE);
    }

    LogLine2Json(String geoFile) {
        this.geoFile = geoFile;
    }

    public Map<String, Object> geoData(String ipStr) {
        Map<String, Object> geoMap = new HashMap<String, Object>();
        File file = new File(geoFile);
        DatabaseReader reader = new DatabaseReader.Builder(file).build();
        // other stuff
    }
}
Now you can create a file from the resource and use this file in your test.
public class LogLine2JsonTest {

    @Rule
    public final TemporaryFolder folder = new TemporaryFolder();

    @Test
    public void shouldGetGeoData() throws Exception {
        File dbFile = copyResourceToFile("/geo/GeoLite2-City.mmdb");
        String ipTest = "128.101.101.101";

        LogLine2Json logLine2Json = new LogLine2Json(dbFile.getAbsolutePath());
        Map<String, Object> geoJson = logLine2Json.geoData(ipTest);

        assertThat(geoJson.get("lLa").toString(), is(equalTo("44.9759")));
        assertThat(geoJson.get("lLo").toString(), is(equalTo("-93.2166")));
    }

    private File copyResourceToFile(String name) throws IOException {
        InputStream resource = getClass().getResourceAsStream(name);
        File file = folder.newFile();
        Files.copy(resource, file.toPath(), StandardCopyOption.REPLACE_EXISTING);
        return file;
    }
}
TemporaryFolder is a JUnit rule that deletes every file created during the test once the test has finished.
You may modify the asserts by using the hasToString matcher. This will give you more detailed information in case of a failing test. (And you have to read/write less code.)
assertThat(geoJson.get("lLa"), hasToString("44.9759"));
assertThat(geoJson.get("lLo"), hasToString("-93.2166"));
You don't. Your question embodies a contradiction in terms. Resources are not files and do not live in the file system. You can either distribute the file separately from the JAR and use it as a File or include it in the JAR and use it as a resource. Not both. You have to make up your mind.
I'm trying to run a Groovy (2.4.3) script on Windows that calls a Groovy class xxxxx.groovy. I've tried a number of variations using the classpath and various scripts (some examples below), always getting MultipleCompilationErrorsException... unable to resolve class.
The class file is firstclass.groovy:
import org.apache.commons.io.FilenameUtils

class firstclassstart {

    def wluid, wlpwd, wlserver, port
    private wlconnection, connectString, jmxConnector, Filpath, Filpass, Filname, OSRPDpath, Passphrase

    // object constructor
    firstclassstart(wluid, wlpwd, wlserver, port) {
        this.wluid = wluid
        this.wlpwd = wlpwd
        this.wlserver = wlserver
        this.port = port
    }

    def isFile(Filpath) {
        // Create a File object representing the folder 'A/B'
        def folder = new File(Filpath)

        if (!org.apache.commons.io.FilenameUtils.isExtension(Filpath, "txt")) {
            println "bad extension"
            return false
        } else if (!folder.exists()) {
            // Create all folders up-to and including B
            println " path is wrong"
            return false
        } else
            println "file found"
        return true
    }
}
cmd line script test.groovy
import firstclass
def sample = new firstclass.firstclassstart("weblogic", "Admin123", "x.com", "7002")
//def sample = new firstclassstart("weblogic", "Admin123", "x.com", "7002")
sample.isFile("./firstclass.groovy")
..\groovy -cp "firstclass.groovy;commons-io-1.3.2.jar" testfc.groovy
script test.groovy
GroovyShell shell = new GroovyShell()
def script = shell.parse(new File('mylib/firstclass.groovy'))
firstclass sample = new script.firstclass("uid", "pwd", "url", "port")
sample.getstatus()
c:>groovy test.groovy
Script test.groovy v2, with firstclass.groovy placed in a directory test below the script:
import test.firstclass
firstclass sample = new script.firstclass("uid", "pwd", "url", "port")
sample.getstatus()
c:>groovy test.groovy
I'm just looking for a bulletproof, portable way to organize my Java classes, .groovy classes, etc. and scripts.
Thanks
I think you can do it using, for example, your first approach:
groovy -cp mylib/firstclass.groovy mylib/test.groovy
However, I see some problems in your code which are probably causing the MultipleCompilationErrorsException.
Since you're including firstclass.groovy in your classpath, you have to add import firstclass in test.groovy.
Why are you using script.firstclass in test.groovy? Your class is simply called firstclass.
In firstclass.groovy you're using import org.apache.commons.io.FilenameUtils and probably others; however, you're not including them in the classpath.
So finally, I think you have to change your test.groovy to something like:
import firstclass
firstclass sample = new firstclass("uid", "pwd", "url", "port")
sample.getstatus()
And in your command, add the remaining entry for Apache Commons IO to the classpath:
groovy -cp "mylib/firstclass.groovy;commons-io-2.4.jar;" mylib/testexe.groovy
Hope this helps,
UPDATE BASED ON OP CHANGES:
After your changes there are still a few things wrong; let me enumerate them:
If your file is called firstclass.groovy, your class must be class firstclass, not class firstclassstart.
In your test.groovy, use new firstclass, not new firstclass.firstclassstart.
So the thing is, your code must be:
class file firstclass.groovy:
import org.apache.commons.io.FilenameUtils

class firstclass {

    def wluid, wlpwd, wlserver, port
    private wlconnection, connectString, jmxConnector, Filpath, Filpass, Filname, OSRPDpath, Passphrase

    // object constructor
    firstclass(wluid, wlpwd, wlserver, port) {
        this.wluid = wluid
        this.wlpwd = wlpwd
        this.wlserver = wlserver
        this.port = port
    }

    def isFile(Filpath) {
        // Create a File object representing the folder 'A/B'
        def folder = new File(Filpath)

        if (!org.apache.commons.io.FilenameUtils.isExtension(Filpath, "txt")) {
            println "bad extension"
            return false
        } else if (!folder.exists()) {
            // Create all folders up-to and including B
            println " path is wrong"
            return false
        } else
            println "file found"
        return true
    }
}
script test.groovy:
import firstclass
def sample = new firstclass("weblogic", "Admin123", "x.com", "7002")
sample.isFile("./firstclass.groovy")
Finally the command to execute it:
groovy -cp "firstclass.groovy;commons-io-1.3.2.jar" test.groovy
With these changes your code should work; I tried it and it works as expected.
Does anyone know how to clear the cache on each new start of a test while running SafariDriver? I've tried using a Java Robot to key-press Command + Option + E, but that does not seem to work; it does not focus on the browser.
Robot r = new Robot();
try {
    Robot robot = new Robot();
    r.keyPress(KeyEvent.META_MASK);
    r.keyPress(KeyEvent.VK_META);
    r.keyPress(KeyEvent.VK_E);
    r.keyRelease(KeyEvent.VK_E);
    r.keyRelease(KeyEvent.VK_META);
    r.keyRelease(KeyEvent.META_MASK);
} catch (AWTException e) {
    e.printStackTrace();
}
I've also tried the Actions builder approach, but that does not seem to work either:
String clearCache = Keys.chord(Keys.CONTROL, Keys.COMMAND, "E");
Actions builder = new Actions(browser);
builder.sendKeys(clearCache);
Action clearCacheAction = builder.build();
clearCacheAction.perform();
I've also looked into using SafariDriver options, but my Java is not good enough to fully understand how to implement it. Below is the code I've been trying to use. I created a SafariOptions class and tried to instantiate it in my @Before method.
package test;

import org.openqa.selenium.safari.SafariDriver;

public class SafariOptions extends SafariDriver {

    private static SafariOptions ourInstance = new SafariOptions();

    public static SafariOptions getInstance() {
        return ourInstance;
    }

    public void setUseCleanSession(boolean useCleanSession) {
    }

    public SafariOptions() {
        boolean useCleanSession = true;
    }
}
@Before
public void createDriver() {
    assumeTrue(isSupportedPlatform());
    browser = new SafariDriver();
    SafariDriver options = new SafariOptions();
}
Nothing seems to clear the Safari cache on each test run.
Quick and easy solution for anyone who might want to know.
Add the following commands to a .sh file at the root of your project:
killall cookied
rm -rf ~/Library/Caches/com.apple.Safari/*
rm -rf ~/Library/Safari/LocalStorage/*
rm -rf ~/Library/Cookies/*
Call the file in your @Before method:
Runtime runtime = Runtime.getRuntime();
runtime.exec("file.sh");
System.out.println("Cookies Removed");
I'm new to Java and having some trouble running an Oozie job from Java code. I can't figure out the problem in the code; any help would be really appreciated. Here's my code:
import java.util.Properties;

import org.apache.oozie.client.OozieClient;
import org.apache.oozie.client.WorkflowJob;

public class oozie {

    public static void main(String[] args) {
        OozieClient wc = new OozieClient("http://host:11000/oozie");
        Properties conf = wc.createConfiguration();
        conf.setProperty(OozieClient.APP_PATH, "hdfs://cluster/user/apps/merge-psp-logs/merge-wf/workflow.xml");
        conf.setProperty("jobTracker", "jobtracker.bigdata.com:8021");
        conf.setProperty("nameNode", "hdfs://namenode.bigdata.com:8020");
        conf.setProperty("queueName", "jobtracker.bigdata.com:8021");
        conf.setProperty("appsRoot", "hdfs://namenode.bigdata.com:8020/user/workspace/apps");
        conf.setProperty("appLibLoc", "hdfs://namenode.bigdata.com:8020/user/workspace/lib");
        conf.setProperty("rawlogsLoc", "hdfs://namenode.bigdata.com:8020/user/workspace/");
        conf.setProperty("mergedlogsLoc", "jobtracker.bigdata.com:8021");

        try {
            String jobId = wc.run(conf);
            System.out.println("Workflow job submitted");

            while (wc.getJobInfo(jobId).getStatus() == WorkflowJob.Status.RUNNING) {
                System.out.println("Workflow job running ...");
                Thread.sleep(10 * 1000);
            }

            System.out.println("Workflow job completed ...");
            System.out.println(wc.getJobInfo(jobId));
        } catch (Exception r) {
            System.out.println("Errors");
        }
    }
}
I am, however, able to launch the job from the command line.
Without any further information, I would say this is the probable cause of your runtime errors:
conf.setProperty(OozieClient.APP_PATH,
        "hdfs://cluster/user/apps/merge-psp-logs/merge-wf/workflow.xml");
conf.setProperty("jobTracker", "jobtracker.bigdata.com:8021");
conf.setProperty("nameNode", "hdfs://namenode.bigdata.com:8020");
conf.setProperty("queueName", "jobtracker.bigdata.com:8021");
Unless you have two clusters, my guess is you meant the APP_PATH to point to the same HDFS instance as the one named in your nameNode property, in which case try:
conf.setProperty(OozieClient.APP_PATH,
        "hdfs://namenode.bigdata.com:8020/user/apps/merge-psp-logs/merge-wf/workflow.xml");
You might also want to change the queueName to a real queue name (probably "default", unless jobtracker.bigdata.com:8021 is the actual name of your queue):
conf.setProperty("queueName", "default");
Aside from those observations, try and post the actual runtime error you're seeing.
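One more thing worth doing while debugging: the catch block in the question only prints "Errors", so the real cause is lost. A sketch of the submission with the corrected APP_PATH and queueName, printing the actual exception (the property values are the ones suggested above):

conf.setProperty(OozieClient.APP_PATH,
        "hdfs://namenode.bigdata.com:8020/user/apps/merge-psp-logs/merge-wf/workflow.xml");
conf.setProperty("queueName", "default");

try {
    String jobId = wc.run(conf);
    System.out.println("Workflow job submitted: " + jobId);
} catch (Exception r) {
    // Print the stack trace instead of swallowing the exception,
    // so the underlying Oozie/HDFS error becomes visible.
    r.printStackTrace();
}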