I have a JMeter script with many test elements: Test Fragments, Include Controllers, Beanshell samplers, SSH samplers, SFTP samplers, JDBC samplers, etc. When I run the JMX script using the Java code below, some of the test elements are skipped. The major problem is that it skips Test Fragments without going inside the other JMX script. We run the Test Fragments via Include Controllers and have tried every combination of paths. Please help me run the Test Fragments inside the JMX file using the Java code below.
I tried all the paths inside the JMX scripts, and I added all the JMeter jars to the Maven repository.
public class Test_SM_RS_001_XML extends BaseClass {
    public void Test121() throws Exception {
        StandardJMeterEngine jmeter = new StandardJMeterEngine();
        Summariser summer = null;
        JMeterResultCollector results;
        File JmxFile1 = new File("/path/to/JMX/File/test121.jmx");
        HashTree testPlanTree = SaveService.loadTree(JmxFile1);
        testPlanTree.getTree(JmxFile1);
        jmeter.configure(testPlanTree);
        String summariserName = JMeterUtils.getPropDefault("summariser.name", "TestSummary");
        if (summariserName.length() > 0) {
            summer = new Summariser(summariserName);
        }
        results = new JMeterResultCollector(summer);
        testPlanTree.add(testPlanTree.getArray()[0], results);
        jmeter.runTest();
        while (jmeter.isActive()) {
            System.out.println("StandardJMeterEngine is Active...");
            Thread.sleep(3000);
        }
        if (results.isFailure()) {
            TestAutomationLogger.error("TEST FAILED");
            Assert.fail("Response Code: " + JMeterResultCollector.getResponseCode() + "\n"
                    + "Response Message: " + JMeterResultCollector.getResponseMessage() + "\n"
                    + "Response Data: " + JMeterResultCollector.getResponseData());
        }
    }
}
I expect the Test Fragments inside the JMX file to run, but they are not considered and are skipped.
Your test code is lacking an essential bit: resolving the Module and Include Controllers, which need to be traversed and added to the "main" HashTree.
So you need to replace this line:
testPlanTree.getTree(JmxFile1);
with these:
JMeterTreeModel treeModel = new JMeterTreeModel(new Object());
JMeterTreeNode root = (JMeterTreeNode) treeModel.getRoot();
treeModel.addSubTree(testPlanTree, root);
SearchByClass<ReplaceableController> replaceableControllers =
        new SearchByClass<>(ReplaceableController.class);
testPlanTree.traverse(replaceableControllers);
Collection<ReplaceableController> replaceableControllersRes =
        replaceableControllers.getSearchResults();
for (ReplaceableController replaceableController : replaceableControllersRes) {
    replaceableController.resolveReplacementSubTree(root);
}
HashTree clonedTree = JMeter.convertSubTree(testPlanTree, true);
and this one:
jmeter.configure(testPlanTree);
with this one:
jmeter.configure(clonedTree);
More information: Five Ways To Launch a JMeter Test without Using the JMeter GUI
We have JaCoCo for coverage. Some tests spawn a new Java process, for which I add the JaCoCo agent arguments, and I get the expected jacoco.exec. Each file has a different path,
i.e. -javaagent:path/jacoco.jar=destfile=path/to/output.exec
I merge those and generate a report in which they correctly show as covered from those external processes.
Later, I try to read the merged.exec through the Java API to perform some internal calculations, but I can't get coverage for those methods.
In some cases I found that there might be multiple class coverage records for a certain line (I assume depending on how many times that particular line was executed), so I use the following methods to get the best coverage out of those:
private List<IClassCoverage> getJacocoCoverageData(ExecutionDataStore execDataStore,
        String classFile) throws IOException {
    List<IClassCoverage> result = new ArrayList<>();
    logger.debug("Processing coverage for class: " + classFile);
    final CoverageBuilder coverageBuilder = new CoverageBuilder();
    final Analyzer analyzer = new Analyzer(execDataStore, coverageBuilder);
    File file = new File(this.workspaceRoot, classFile);
    logger.debug("Analyzing coverage in: " + file);
    if (file.exists()) {
        try (FileInputStream fis = new FileInputStream(file)) {
            analyzer.analyzeClass(fis, file.getAbsolutePath());
        }
        Iterator<IClassCoverage> it = coverageBuilder.getClasses().iterator();
        while (it.hasNext()) {
            result.add(it.next());
        }
    }
    return result;
}
private IClassCoverage getBestCoverage(List<IClassCoverage> coverage, int workingCopyLine) {
    IClassCoverage coverageData = null;
    for (IClassCoverage cc : coverage) {
        ILine temp = cc.getLine(workingCopyLine);
        if (coverageData == null
                || temp.getStatus() > coverageData.getLine(workingCopyLine).getStatus()) {
            coverageData = cc;
        }
    }
    return coverageData;
}
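Stripped of the JaCoCo types, getBestCoverage() is just a max-by scan over the per-record status of one line (in JaCoCo, ILine.getStatus() returns an int status). A stand-alone illustration with plain ints, using made-up data:

```java
import java.util.Arrays;
import java.util.List;

public class BestStatus {
    // Same selection rule as getBestCoverage(): keep the highest
    // status seen for the line of interest across all records
    static int bestStatus(List<int[]> records, int line) {
        int best = -1;
        for (int[] statuses : records) {
            if (statuses[line] > best) {
                best = statuses[line];
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Three hypothetical records for the same class, one status per line
        List<int[]> records = Arrays.asList(
                new int[]{0, 1, 1},
                new int[]{0, 2, 1},
                new int[]{0, 1, 2});
        System.out.println(bestStatus(records, 1)); // prints 2
    }
}
```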
Somehow I only find "not covered" coverage data, even though both the reports and the methods above read the same merged.exec file.
This turned out to be something completely unrelated to the JaCoCo file. The code above worked fine.
After running my load test, JMeter writes its results to "summary.csv".
Some URLs in this file look like:
1482255989405,3359,POST ...users/G0356GM7QOITIMGA/...
1482255989479,3310,POST ...users/HRC50JG3T524N9RN/...
1482255989488,3354,POST ...users/54QEGZB54BEWOCJJ/...
where "...users/G0356GM7QOITIMGA/..." is the URL column.
After that I try to generate the JMeter report using this command:
jmeter -g summary.csv -o report
However, this action throws an out-of-memory exception (because of the many distinct URLs).
So I decided to edit summary.csv in a tearDown Thread Group and replace every ID with a fixed string, using a Beanshell Sampler:
import java.io.*;
import org.apache.jmeter.services.FileServer;

try {
    String sep = System.getProperty("line.separator");
    String summaryFileDirPath = FileServer.getFileServer().getBaseDir() + File.separator;
    String summaryFilePath = summaryFileDirPath + "summary.csv";
    log.info("read " + summaryFilePath);
    File file = new File(summaryFilePath);
    BufferedReader reader = new BufferedReader(new FileReader(file));
    String line;
    String text = "";
    while ((line = reader.readLine()) != null) {
        text += line + sep;
    }
    reader.close();
    log.info(summaryFilePath);
    file.delete();
    FileWriter writer = new FileWriter(summaryFileDirPath + "summary.csv", false);
    writer.write(text.replaceAll("users/[A-Z0-9]*/", "users/EUCI/"));
    writer.close();
} catch (Exception e) {
    e.printStackTrace();
}
Result: (screenshot of summary.csv)
It seems like JMeter appends some rows after the tearDown Thread Group finishes its work.
How can I edit the summary.csv file after the test run using only the JMeter script?
PS: I need to collect results only in summary.csv.
There is a JMeter property, jmeter.save.saveservice.autoflush; most probably you are suffering from its default value of false:
# AutoFlush on each line written in XML or CSV output
# Setting this to true will result in less test results data loss in case of Crash
# but with impact on performances, particularly for intensive tests (low or no pauses)
# Since JMeter 2.10, this is false by default
#jmeter.save.saveservice.autoflush=false
You can override the value in at least two ways:
Add the following line to the user.properties file:
jmeter.save.saveservice.autoflush=true
Or pass it to JMeter via the -J command-line argument:
jmeter -Jjmeter.save.saveservice.autoflush=true -n -t ....
See the Apache JMeter Properties Customization Guide for comprehensive information on JMeter properties and ways of working with them.
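Alternatively, you can post-process summary.csv entirely outside of JMeter once the run has finished. A plain-Java sketch of the same regex masking as the Beanshell sampler in the question (the summary.csv file name is taken from the question; adjust the path as needed):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

public class MaskSummaryIds {
    public static void main(String[] args) throws IOException {
        // File name taken from the question; adjust as needed
        Path summary = Paths.get("summary.csv");
        List<String> masked = new ArrayList<>();
        for (String line : Files.readAllLines(summary)) {
            // Same replacement as the Beanshell sampler in the question
            masked.add(line.replaceAll("users/[A-Z0-9]*/", "users/EUCI/"));
        }
        Files.write(summary, masked); // overwrite the file in place
    }
}
```

Processing line by line also avoids building the whole file in one String, which matters for large result files.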
I'm using Java to run a JMX file which has a disabled sampler, so I thought it would not run the disabled sampler, but it does. This is the JMX file code; as you can see, enabled="false":
<HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="Edit User All Parameters" enabled="false">
and I'm using the org.apache.jmeter.save.SaveService class to load the file content:
File jmxFile = new File(jmxFilePath);
HashTree testPlanTree = null;
try {
    testPlanTree = SaveService.loadTree(jmxFile);
} catch (IOException ex) {
    ex.printStackTrace(); // don't swallow the exception silently
}
Any idea how I can fix this issue?
My expectation is that your code is missing the JMeter.convertSubTree() method. As per the Javadoc:
convertSubTree
public static void convertSubTree(HashTree tree)
Remove disabled elements. Replace the ReplaceableController with the target subtree.
So make sure you call it before you run your test.
Example code (just in case you still need it); pay attention to the JMeter.convertSubTree(testPlanTree); line:
// JMeter Home
String jmeterHome = "c:/apps/jmeter";
// JMeter Engine
StandardJMeterEngine jmeter = new StandardJMeterEngine();
// Initialize Properties, logging, locale, etc.
JMeterUtils.loadJMeterProperties(jmeterHome + "/bin/jmeter.properties");
JMeterUtils.setJMeterHome(jmeterHome);
JMeterUtils.initLogging(); // you can comment this line out to see extra log messages of i.e. DEBUG level
JMeterUtils.initLocale();
// Initialize JMeter SaveService
SaveService.loadProperties();
// Load existing .jmx Test Plan
HashTree testPlanTree = SaveService.loadTree(new File(jmeterHome + "/bin/test.jmx"));
// Remove disabled test elements
JMeter.convertSubTree(testPlanTree);
// Add summariser
Summariser summer = null;
String summariserName = JMeterUtils.getPropDefault("summariser.name", "summary");
if (summariserName.length() > 0) {
    summer = new Summariser(summariserName);
}
// Store execution results into a .jtl file
String logFile = jmeterHome + "/bin/test.jtl";
ResultCollector logger = new ResultCollector(summer);
logger.setFilename(logFile);
testPlanTree.add(testPlanTree.getArray()[0], logger);
// Run JMeter Test
jmeter.configure(testPlanTree);
jmeter.run();
See the Five Ways To Launch a JMeter Test without Using the JMeter GUI article to learn more about different ways of executing a JMeter test.
I get the following error
java.lang.IllegalStateException: Unable to determine the default workspace location. Check your OSGi-less platform configuration of the plugin or datatools workspace path.
which makes little sense.
Reports are created using the BIRT designer within Eclipse, and we are using code to convert the reports into PDF.
The code looks something like:
final EngineConfig config = new EngineConfig();
config.setBIRTHome("./birt");
Platform.startup(config);
final IReportEngineFactory factory = (IReportEngineFactory) Platform
        .createFactoryObject(IReportEngineFactory.EXTENSION_REPORT_ENGINE_FACTORY);
final HTMLRenderOption ho = new HTMLRenderOption();
ho.setImageHandler(new HTMLCompleteImageHandler());
config.setEmitterConfiguration(RenderOption.OUTPUT_FORMAT_HTML, ho);
// Create the engine.
this.engine = factory.createReportEngine(config);
final IReportRunnable report = this.engine.openReportDesign(reportName);
final IRunAndRenderTask task = this.engine.createRunAndRenderTask(report);
final RenderOption options = new HTMLRenderOption();
options.setOutputFormat("pdf");
final String output = reportName.replaceFirst(".rptdesign", ".pdf");
options.setOutputFileName(output);
task.setRenderOption(options);
// Run the report.
task.run();
It seems the error is thrown during the task.run() call.
This needs to run standalone, without Eclipse. I hoped that setting the BIRT home would make it happy, but there seems to be some other connection profile I am unaware of and probably don't need.
The full error :
07-Jan-2013 14:55:31 org.eclipse.datatools.connectivity.internal.ConnectivityPlugin log
SEVERE: Unable to determine the default workspace location. Check your OSGi-less platform configuration of the plugin or datatools workspace path.
07-Jan-2013 14:55:31 org.eclipse.birt.report.engine.api.impl.EngineTask handleFatalExceptions
SEVERE: An error happened while running the report. Cause:
java.lang.IllegalStateException: Unable to determine the default workspace location. Check your OSGi-less platform configuration of the plugin or datatools workspace path.
at org.eclipse.datatools.connectivity.internal.ConnectivityPlugin.getDefaultStateLocation(ConnectivityPlugin.java:155)
at org.eclipse.datatools.connectivity.internal.ConnectivityPlugin.getStorageLocation(ConnectivityPlugin.java:191)
at org.eclipse.datatools.connectivity.internal.ConnectionProfileMgmt.getStorageLocation(ConnectionProfileMgmt.java:1060)
at org.eclipse.datatools.connectivity.oda.profile.internal.OdaProfileFactory.defaultProfileStoreFile(OdaProfileFactory.java:170)
at org.eclipse.datatools.connectivity.oda.profile.OdaProfileExplorer.defaultProfileStoreFile(OdaProfileExplorer.java:138)
at org.eclipse.datatools.connectivity.oda.profile.OdaProfileExplorer.loadProfiles(OdaProfileExplorer.java:292)
at org.eclipse.datatools.connectivity.oda.profile.OdaProfileExplorer.getProfileByName(OdaProfileExplorer.java:537)
at org.eclipse.datatools.connectivity.oda.profile.provider.ProfilePropertyProviderImpl.getConnectionProfileImpl(ProfilePropertyProviderImpl.java:184)
at org.eclipse.datatools.connectivity.oda.profile.provider.ProfilePropertyProviderImpl.getDataSourceProperties(ProfilePropertyProviderImpl.java:64)
at org.eclipse.datatools.connectivity.oda.consumer.helper.ConnectionPropertyHandler.getEffectiveProperties(ConnectionPropertyHandler.java:123)
at org.eclipse.datatools.connectivity.oda.consumer.helper.OdaConnection.getEffectiveProperties(OdaConnection.java:826)
at org.eclipse.datatools.connectivity.oda.consumer.helper.OdaConnection.open(OdaConnection.java:240)
at org.eclipse.birt.data.engine.odaconsumer.ConnectionManager.openConnection(ConnectionManager.java:165)
at org.eclipse.birt.data.engine.executor.DataSource.newConnection(DataSource.java:224)
at org.eclipse.birt.data.engine.executor.DataSource.open(DataSource.java:212)
at org.eclipse.birt.data.engine.impl.DataSourceRuntime.openOdiDataSource(DataSourceRuntime.java:217)
at org.eclipse.birt.data.engine.impl.QueryExecutor.openDataSource(QueryExecutor.java:407)
at org.eclipse.birt.data.engine.impl.QueryExecutor.prepareExecution(QueryExecutor.java:317)
at org.eclipse.birt.data.engine.impl.PreparedQuery.doPrepare(PreparedQuery.java:455)
at org.eclipse.birt.data.engine.impl.PreparedDataSourceQuery.produceQueryResults(PreparedDataSourceQuery.java:190)
at org.eclipse.birt.data.engine.impl.PreparedDataSourceQuery.execute(PreparedDataSourceQuery.java:178)
at org.eclipse.birt.data.engine.impl.PreparedOdaDSQuery.execute(PreparedOdaDSQuery.java:145)
at org.eclipse.birt.report.data.adapter.impl.DataRequestSessionImpl.execute(DataRequestSessionImpl.java:624)
at org.eclipse.birt.report.engine.data.dte.DteDataEngine.doExecuteQuery(DteDataEngine.java:152)
at org.eclipse.birt.report.engine.data.dte.AbstractDataEngine.execute(AbstractDataEngine.java:267)
at org.eclipse.birt.report.engine.executor.ExecutionContext.executeQuery(ExecutionContext.java:1939)
at org.eclipse.birt.report.engine.executor.QueryItemExecutor.executeQuery(QueryItemExecutor.java:80)
at org.eclipse.birt.report.engine.executor.TableItemExecutor.execute(TableItemExecutor.java:62)
at org.eclipse.birt.report.engine.internal.executor.dup.SuppressDuplicateItemExecutor.execute(SuppressDuplicateItemExecutor.java:43)
at org.eclipse.birt.report.engine.internal.executor.wrap.WrappedReportItemExecutor.execute(WrappedReportItemExecutor.java:46)
at org.eclipse.birt.report.engine.internal.executor.l18n.LocalizedReportItemExecutor.execute(LocalizedReportItemExecutor.java:34)
at org.eclipse.birt.report.engine.layout.html.HTMLBlockStackingLM.layoutNodes(HTMLBlockStackingLM.java:65)
at org.eclipse.birt.report.engine.layout.html.HTMLPageLM.layout(HTMLPageLM.java:92)
at org.eclipse.birt.report.engine.layout.html.HTMLReportLayoutEngine.layout(HTMLReportLayoutEngine.java:100)
at org.eclipse.birt.report.engine.api.impl.RunAndRenderTask.doRun(RunAndRenderTask.java:180)
at org.eclipse.birt.report.engine.api.impl.RunAndRenderTask.run(RunAndRenderTask.java:77)
Has anyone seen this error and can point me in the right direction?
When I had this issue, I tried two things. The first solved the error, but then I just got to the next error.
The first thing I tried was editing the setenv.sh file to have the following line:
export CATALINA_OPTS="$CATALINA_OPTS -Djava.io.tmpdir=/opt/local/share/tomcat/apache-tomcat-8.0.8/temp/tmpdir -Dorg.eclipse.datatools_workspacepath=/opt/local/share/tomcat/apache-tomcat-8.0.8/temp/tmpdir/workspace_dtp"
This solution worked after I created the tmpdir and workspace_dtp directories on my local Tomcat server. This was done in response to the guidance here.
However, I just got to the next error, which was a connection profile error. I can look into it again if you need. I know how to replicate the issue.
The second thing I tried ended up solving the issue completely and had to do with our report designer selecting the wrong type of data source in the report design process. See my post on the Eclipse BIRT forums for the full story.
Basically, the report type was set to "JDBC Database Connection for Query Builder" when it should have been set to "JDBC Data Source." See the picture for reference:
Here is a tip that saved me from that pain:
just launch Eclipse with the "-clean" option after installing the BIRT plugins.
To be clear, my project was built from BIRT Maven dependencies and so should not use Eclipse dependencies to run (except for designing reports), but I think there was a conflict somewhere, especially with org.eclipse.datatools.connectivity_1.2.4.v201202041105.jar.
For global understanding, you should follow the migration guide :
http://wiki.eclipse.org/Birt_3.7_Migration_Guide#Connection_Profiles
It explains how to use a connection profile to externalize data source parameters.
So it's not required if you define JDBC parameters directly in report design.
I used this programmatic way to initialize the workspace directory:
@Override
public void initializeEngine() throws BirtException {
    // define eclipse datatools workspace path (required)
    String workspacePath = setDataToolsWorkspacePath();
    // set configuration
    final EngineConfig config = new EngineConfig();
    config.setLogConfig(workspacePath, Level.WARNING);
    // config.setResourcePath(getSqlDriverClassJarPath());
    // startup OSGi framework
    Platform.startup(config); // really needed ?
    IReportEngineFactory factory = (IReportEngineFactory) Platform
            .createFactoryObject(IReportEngineFactory.EXTENSION_REPORT_ENGINE_FACTORY);
    engine = factory.createReportEngine(config);
    engine.changeLogLevel(Level.WARNING);
}
private String setDataToolsWorkspacePath() {
    String workspacePath = System.getProperty(DATATOOLS_WORKSPACE_PATH);
    if (workspacePath == null) {
        workspacePath = FilenameUtils.concat(SystemUtils.getJavaIoTmpDir().getAbsolutePath(), "workspace_dtp");
        File workspaceDir = new File(workspacePath);
        if (!workspaceDir.exists()) {
            workspaceDir.mkdir();
        }
        if (!workspaceDir.canWrite()) {
            workspaceDir.setWritable(true);
        }
        System.setProperty(DATATOOLS_WORKSPACE_PATH, workspacePath);
    }
    return workspacePath;
}
I also needed to force data source parameters at runtime this way:
private void generateReportOutput(InputStream reportDesignInStream, File outputFile, OUTPUT_FORMAT outputFormat,
        Map<PARAM, Object> params) throws EngineException, SemanticException {
    // Open a report design
    IReportRunnable design = engine.openReportDesign(reportDesignInStream);
    // Use data-source properties from persistence.xml
    forceDataSource(design);
    // Create RunAndRender task
    IRunAndRenderTask runTask = engine.createRunAndRenderTask(design);
    // Use data-source from JPA persistence context
    // forceDataSourceConnection(runTask);
    // Define report parameters
    defineReportParameters(runTask, params);
    // Set render options
    runTask.setRenderOption(getRenderOptions(outputFile, outputFormat, params));
    // Execute task
    runTask.run();
}
private void forceDataSource(IReportRunnable runableReport) throws SemanticException {
    DesignElementHandle designHandle = runableReport.getDesignHandle();
    Map<String, String> persistenceProperties = PersistenceUtils.getPersistenceProperties();
    String dsURL = persistenceProperties.get(AvailableSettings.JDBC_URL);
    String dsDatabase = StringUtils.substringAfterLast(dsURL, "/");
    String dsUser = persistenceProperties.get(AvailableSettings.JDBC_USER);
    String dsPass = persistenceProperties.get(AvailableSettings.JDBC_PASSWORD);
    String dsDriver = persistenceProperties.get(AvailableSettings.JDBC_DRIVER);
    SlotHandle dataSources = ((ReportDesignHandle) designHandle).getDataSources();
    int count = dataSources.getCount();
    for (int i = 0; i < count; i++) {
        DesignElementHandle dsHandle = dataSources.get(i);
        if (dsHandle != null && dsHandle instanceof OdaDataSourceHandle) {
            // replace connection properties from persistence.xml
            dsHandle.setProperty("databaseName", dsDatabase);
            dsHandle.setProperty("username", dsUser);
            dsHandle.setProperty("password", dsPass);
            dsHandle.setProperty("URL", dsURL);
            dsHandle.setProperty("driverClass", dsDriver);
            dsHandle.setProperty("jarList", getSqlDriverClassJarPath());
            // @SuppressWarnings("unchecked")
            // List<ExtendedProperty> privateProperties = (List<ExtendedProperty>) dsHandle
            //         .getProperty("privateDriverProperties");
            // for (ExtendedProperty extProp : privateProperties) {
            //     if ("odaUser".equals(extProp.getName())) {
            //         extProp.setValue(dsUser);
            //     }
            // }
        }
    }
}
I was having the same issue.
Changing the Data Source type from "JDBC Database Connection for Query Builder" to "JDBC Data Source" solved the problem for me.
I want to print the results of my JUnit tests to a .txt file.
Following is my code:
try {
    // Creates the HTML header
    String breaks = "<html><center><p><h2>" + "Test Started on: " + df.format(date) + "</h2></p></center>";
    // Creating two files for passing and failing a test
    File pass = new File("Result_Passed-" + df.format(date) + ".HTML");
    File failed = new File("Result_Failed-" + df.format(date) + ".HTML");
    OutputStream fstreamF = new FileOutputStream(failed, true);
    OutputStream fstream = new FileOutputStream(pass, true);
    PrintStream p = new PrintStream(fstream);
    PrintStream f = new PrintStream(fstreamF);
    // appending the HTML code to the two files
    p.append(breaks);
    f.append(breaks);
} catch (FileNotFoundException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
}
}
Following is my example test case:
public void test_001_AccountWorld1() {
    // Open the MS CRM form to be tested.
    driver.get(crmServerUrl + "account");
    nameOfIFRAME = "IFRAME_CapCRM";
    PerformCRM_World1("address1_name", "address1_name", "address1_line1", "address1_postalcode", true);
    assertEquals(firstLineFromForm.toString(), "");
    assertEquals(secondLineFromForm.toString(), "Donaustadtstrasse Bürohaus 1/2 . St");
    assertEquals(postcodeFromForm.toString(), "1220");
}
I've tried p.append(), but it doesn't work. Help please.
In general, you can redirect your output to a file as follows:
If you are using Eclipse:
Run configuration --> Commons --> Output File --> your file name
If you run from the command line, just use:
java ..... >output.txt
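The same redirection can also be done from inside the Java process by swapping System.out before printing (a minimal sketch; output.txt is just an example name):

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.PrintStream;

public class RedirectStdout {
    public static void main(String[] args) throws IOException {
        // Hypothetical file name; pick whatever report name you need
        PrintStream fileOut = new PrintStream(new FileOutputStream("output.txt"));
        PrintStream console = System.out;
        System.setOut(fileOut); // everything printed from here on goes to the file
        System.out.println("Test Started on: 2016-12-20");
        System.setOut(console); // restore the console stream
        fileOut.close();
    }
}
```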
You're probably re-inventing the wheel here. Ant, Maven, your build tool of choice, or your CI server should be doing this for you.
When I want to do this, I run from the command line with a custom runner running a custom suite. Very simple, almost no code. The suite just has the tests you want to run, and the runner is below. You can see the logic there for printing out; my code just prints errors, but you can easily adapt it to print everything to a file. Essentially you are just looking at the result object's collection of failures and successes.
public class UnitTestRunner {
    static JUnitCore junitCore;
    static Class<?> testClasses;

    public static void main(String[] args) {
        System.out.println("Running Junit Test Suite.");
        Result result = JUnitCore.runClasses(TestSuite.class);
        for (Failure failure : result.getFailures()) {
            System.out.println(failure.toString());
        }
        System.out.println("Successful: " + result.wasSuccessful() +
                " ran " + result.getRunCount() + " tests");
    }
}
```
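To take the runner one step further and print everything to a file, the println calls can feed a small collector like the one below (the TestReportWriter name and API are made up for illustration; you would call record() for each entry in result.getFailures() and for each success):

```java
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: collects pass/fail lines and writes a plain-text report
public class TestReportWriter {
    private final List<String> lines = new ArrayList<>();

    public void record(String testName, boolean passed, String detail) {
        String status = passed ? "PASS" : "FAIL";
        lines.add(status + " " + testName + (detail.isEmpty() ? "" : ": " + detail));
    }

    public void writeTo(File report) throws IOException {
        try (PrintWriter out = new PrintWriter(new FileWriter(report))) {
            for (String line : lines) {
                out.println(line);
            }
        }
    }
}
```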
I believe this functionality already exists. Read this part of JUnit's FAQ.