After running my load test, JMeter generates results into "summary.csv".
Some URLs in this file look like:
1482255989405,3359,POST ...users/G0356GM7QOITIMGA/...
1482255989479,3310,POST ...users/HRC50JG3T524N9RN/...
1482255989488,3354,POST ...users/54QEGZB54BEWOCJJ/...
Where "...users/G0356GM7QOITIMGA/..." - its URL column.
After that I try to generate jmeter-report using this command:
jmeter -g summary.csv -o report
Howewer this action throw Out of memory exception (because of many different URLs).
So I decide to edit summary.csv in tearDown Thread Group and replace all ID to "someID" string, using BeanShell Sampler:
import java.io.*;
import org.apache.jmeter.services.FileServer;

try {
    String sep = System.getProperty("line.separator");
    String summaryFileDirPath = FileServer.getFileServer().getBaseDir() + File.separator;
    String summaryFilePath = summaryFileDirPath + "summary.csv";
    log.info("read " + summaryFilePath);

    // Read the whole results file into memory
    File file = new File(summaryFilePath);
    BufferedReader reader = new BufferedReader(new FileReader(file));
    StringBuilder text = new StringBuilder();
    String line;
    while ((line = reader.readLine()) != null) {
        text.append(line).append(sep);
    }
    reader.close();
    log.info(summaryFilePath);

    // Rewrite the file, collapsing every user ID into a single placeholder
    file.delete();
    FileWriter writer = new FileWriter(summaryFileDirPath + "summary.csv", false);
    writer.write(text.toString().replaceAll("users/[A-Z0-9]*/", "users/EUCI/"));
    writer.close();
} catch (Exception e) {
    e.printStackTrace();
}
Result: (screenshot of summary.csv)
It seems like JMeter appends some rows after the tearDown Thread Group finishes its work.
How can I edit the summary.csv file after the test run using only a JMeter script?
PS: I need to collect results only in summary.csv.
There is a JMeter property, jmeter.save.saveservice.autoflush; most probably you are suffering from its default value of false, meaning buffered results are only flushed to summary.csv at the end of the test, after your tearDown Thread Group has already rewritten the file:
# AutoFlush on each line written in XML or CSV output
# Setting this to true will result in less test results data loss in case of Crash
# but with impact on performances, particularly for intensive tests (low or no pauses)
# Since JMeter 2.10, this is false by default
#jmeter.save.saveservice.autoflush=false
You can override the value in at least 2 ways:
Add the following line to the user.properties file:
jmeter.save.saveservice.autoflush=true
Pass it to JMeter via the -J command-line argument, like:
jmeter -Jjmeter.save.saveservice.autoflush=true -n -t ....
See the Apache JMeter Properties Customization Guide article for comprehensive information on JMeter properties and ways of working with them.
Related
What is the difference between those two queries:
SELECT my_fun(col_name) FROM my_table;
and
CREATE TABLE new_table AS SELECT my_fun(col_name) FROM my_table;
where my_fun is a Java UDF.
I'm asking because when I create the new table (second query) I receive a Java error:
Failure while running task:java.lang.RuntimeException: java.lang.RuntimeException: Map operator initialization failed
...
Caused by: org.apache.hadoop.hive.ql.exec.UDFArgumentException: Unable to instantiate UDF implementation class com.company_name.examples.ExampleUDF: java.lang.NullPointerException
I found that the source of the error is this line in my Java file:
encoded = Files.readAllBytes(Paths.get(configPath));
But the question is: why does it work when the table is not created and fail when the table is created?
The problem might be with the way you read the file. Try passing the file path as the second argument of the UDF, then read it as follows:
// requires java.nio.file.* and java.nio.charset.Charset imports,
// plus org.apache.hadoop.hive.ql.metadata.HiveException
private BufferedReader getReaderFor(String filePath) throws HiveException {
    try {
        Path fullFilePath = FileSystems.getDefault().getPath(filePath);
        Path fileName = fullFilePath.getFileName();
        if (Files.exists(fileName)) {
            // the bare file name resolves when the file was shipped via the distributed cache
            return Files.newBufferedReader(fileName, Charset.defaultCharset());
        }
        else if (Files.exists(fullFilePath)) {
            return Files.newBufferedReader(fullFilePath, Charset.defaultCharset());
        }
        else {
            throw new HiveException("Could not find \"" + fileName + "\" or \"" + fullFilePath + "\" in intersect_file() UDF.");
        }
    }
    catch (IOException exception) {
        throw new HiveException(exception);
    }
}

private void loadFromFile(String filePath) throws HiveException {
    set = new HashSet<String>();
    try (BufferedReader reader = getReaderFor(filePath)) {
        String line;
        while ((line = reader.readLine()) != null) {
            set.add(line);
        }
    } catch (IOException e) {
        throw new HiveException(e);
    }
}
The full code for a different generic UDF that utilizes a file reader can be found here.
I think there are several points unclear, so this answer is based on assumptions.
First of all, it is important to understand that Hive currently optimizes several simple queries. Depending on the size of your data, the query that works for you, SELECT my_fun(col_name) FROM my_table;, is most likely running locally on the client where you execute the job, which is why your UDF can access the config file that is available locally; this "execution mode" is due to the size of your data. CTAS triggers a job independent of the input data; this job runs distributed across the cluster, where each worker fails to access your config file.
It looks like you are trying to read your configuration file from the local file system, not from HDFS: Files.readAllBytes(Paths.get(configPath)). This means that your configuration has to either be replicated to all the worker nodes or be added to the distributed cache beforehand (you can use add file for this; doc here). You can find other questions here about accessing files from the distributed cache from UDFs.
One additional problem is that you are passing the location of your config file through an environment variable, which is not propagated to worker nodes as part of your Hive job. You should pass this configuration as a Hive config; there is an answer about accessing Hive config from a UDF here, assuming that you are extending GenericUDF.
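For reference, here is a minimal sketch (under the assumption that you extend GenericUDF; the property name hive.myudf.config.path is a made-up example) of picking up such a Hive config value inside the UDF via configure(MapredContext):

import org.apache.hadoop.hive.ql.exec.MapredContext;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDF;

public abstract class ConfigAwareUDF extends GenericUDF {
    protected String configPath;

    @Override
    public void configure(MapredContext context) {
        // Called on each worker when the UDF is initialized inside a MapReduce/Tez task,
        // so a value set via "set hive.myudf.config.path=..." reaches every node.
        if (context != null && context.getJobConf() != null) {
            configPath = context.getJobConf().get("hive.myudf.config.path"); // hypothetical key
        }
    }
}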
I need to figure out a way to load content from a file containing a list of ids in a preprocessing step in JMeter. This needs to happen only once, not for every request. So it should be like:
Load all the list of static ids from the file once.
For every request pick one id randomly from this list.
POST the request
I am trying to explore the JSR223 PreProcessor, but without much luck so far. Also I am not sure whether the preprocessor executes for every request, which I do not want.
My current JSR223 PreProcessor looks something like the following:
import java.util.*;
import java.io.*;

try {
    Random generator = new Random();
    List<String> uuids = new ArrayList<String>();
    int n = 1000;
    try (BufferedReader br = new BufferedReader(new FileReader("/uuids.txt"))) {
        String line = br.readLine();
        while (line != null) {
            uuids.add(line);
            line = br.readLine();
        }
    }
    // pick a random index, then look up the uuid at that index
    int rn = generator.nextInt(n);
    vars.put("some_file", "/files/" + uuids.get(rn) + ".json.gz");
} catch (Throwable ex) {
    log.error("Something went wrong", ex);
    throw ex;
}
Your approach is a little bit wrong because:
JSR223 PreProcessor is executed before each request in its scope
JSR223 PreProcessor is executed by each thread (virtual user)
So I would recommend the following enhancement:
Add a setUp Thread Group to your test plan
Add a JSR223 Sampler to it with the following Groovy code:
SampleResult.setIgnore()
props.put('uuids', new File('uuids.txt').readLines())
This will let you read the file only once and by only one thread.
Whenever you want to access a random uuid, you can use the following __groovy() function:
${__groovy(props.get('uuids').get(org.apache.commons.lang3.RandomUtils.nextInt(0\,props.get('uuids').size())),)}
More information on Groovy scripting in JMeter: Apache Groovy - Why and How You Should Use It
Alternatively, you can use JMeter's bzm - Random CSV Data Set Config plugin.
Just input the CSV filename and it will pick a random uuid every time.
Here is my code:
Process p = Runtime.getRuntime().exec(new String[]{"bash","-c",new String(command.getBytes(),"utf-8")});
I found that new String(command.getBytes(), "utf-8") has no effect.
How can I set the charset?
My app is a Spring Boot application.
The detailed command is:
./xxx.jar --execute "select * from xxx where a = `我`"
When I execute the command directly in the shell it runs well, but the Java code gets garbled output.
I set -Dfile.encoding=UTF-8, but it did not help. Why?
I found that new String(command.getBytes(), "utf-8") has no effect.
This isn't accurate. Below is an example using different character sets (ASCII and UTF-8) to run the same command via exec(), and the output is pretty clearly affected by the character set.
This program:
takes a single input parameter,
runs touch to create two files at /tmp/charset-test/ using that input value in the filename
further, if the input is a UTF-8 value, it should create a file with the UTF-8 value in the filename
import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class CharsetTest {
    public static void main(String[] args) throws IOException {
        String input = args[0];
        System.out.println("input: " + input);
        Charset[] charsets = {StandardCharsets.US_ASCII, StandardCharsets.UTF_8};
        for (Charset charset : charsets) {
            String command = "touch /tmp/charset-test/" + input + "-" + charset.toString() + ".txt";
            System.out.println("command: " + command);
            // this is identical to your code, but:
            // - use Charsets instead of "utf-8" so I can iterate; "utf-8" also works
            // - skip assigning to "Process p"
            Runtime.getRuntime().exec(new String[]{
                    "bash", "-c", new String(command.getBytes(), charset)
            });
        }
    }
}
If I run with the ASCII input "simple", it creates two files, one for each charset: "simple-US-ASCII.txt" and "simple-UTF-8.txt". This isn't all that interesting, but it shows that both charsets work normally with basic (ASCII) input.
% rm /tmp/charset-test/*.txt && java CharsetTest.java simple
input: simple
command: touch /tmp/charset-test/simple-US-ASCII.txt
command: touch /tmp/charset-test/simple-UTF-8.txt
% ls /tmp/charset-test
simple-US-ASCII.txt simple-UTF-8.txt
If the input changes to "我", then the ASCII charset handling produces the same "garbled" output you describe ("���-US-ASCII.txt"), whereas the UTF-8 version looks good ("我-UTF-8.txt"):
% rm /tmp/charset-test/*.txt && java CharsetTest.java 我
input: 我
command: touch /tmp/charset-test/我-US-ASCII.txt
command: touch /tmp/charset-test/我-UTF-8.txt
% ls /tmp/charset-test
我-UTF-8.txt ���-US-ASCII.txt
All of this to say: your code looks fine; it's doing the right thing by passing the charset to the Runtime.exec() call. I can't say what the proper solution is, but it's likely something in the environment (not your code).
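If the environment is indeed the culprit, one thing worth trying (a sketch, not a verified fix; the LANG value is an assumption about the locales installed on your system) is forcing a UTF-8 locale on the child process via ProcessBuilder:

import java.io.IOException;

public class Utf8Exec {
    public static void main(String[] args) throws IOException {
        String command = "./xxx.jar --execute \"select * from xxx where a = `我`\"";
        ProcessBuilder pb = new ProcessBuilder("bash", "-c", command);
        // Force a UTF-8 locale for the child; a spawned shell may otherwise
        // fall back to POSIX/C and mangle non-ASCII bytes.
        pb.environment().put("LANG", "en_US.UTF-8");
        pb.inheritIO(); // show the child's output directly in this terminal
        pb.start();
    }
}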
I'm trying to convert PDF to txt using Java. I tried Apache PDFBox but, for some weird reason, it doesn't convert the whole document. For this reason I decided to use pdftotext by executing a Runtime.getRuntime().exec() call. The problem is that, while pdftotext works flawlessly in my terminal, the exec() call gives me exit code 1 (sometimes even 99).
Here's the call:
pdftotext "/home/www-data/CANEFS_TEST/Hello/ciao.pdf" "/tmp/ciao.pdf.txt"
Here's the code:
private static File callPDF2Text(File input, File output) {
    assert input.exists();
    assert Utils.getExtension(input).equalsIgnoreCase("pdf");
    assert Utils.getExtension(output).equalsIgnoreCase("txt") : output.getAbsoluteFile().toString();
    Process p = null;
    try {
        System.out.println(String.format(
                PDF2TXT_COMMAND,
                input.getAbsolutePath(),
                output.getAbsolutePath()));
        p = Runtime.getRuntime().exec(String.format(
                PDF2TXT_COMMAND,
                input.getAbsolutePath(),
                output.getAbsolutePath()));
        p.waitFor();
        if (p.exitValue() != 0) {
            throw new RuntimeException("exit value for pdftotext is " + p.exitValue());
        }
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
    return output;
}
Here's the PDF2TXT_COMMAND string definition:
public static final String PDF2TXT_COMMAND = "pdftotext \"%s\" \"%s\"";
I know that usually these kinds of errors are caused by the permission setup. So, here's the output of the ls -l command on the Hello folder:
ls -l /home/www-data/CANEFS_TEST/Hello/
total 136
-rwxrwxr-- 1 www-data www-data 136041 mar 27 16:31 ciao.pdf
Also, note that the user creating the process is koldar, who is in the www-data group.
Thank you for your time and patience!
Don't use " in your format string... These chars are specially parsed by the shell and you don't use a shell to launch the command...
I can suggest you to use exec(String []) not exec(String) so that you will be able to separate each arg of your command:
String []command = new String[3];
command[0] = "pdftotext";
command[1] = input.getAbsolutePath();
command[2] = output.getAbsolutePath();
Runtime.getRuntime().exec(command);
That should work. If it doesn't, it may be a question of access rights on the directory.
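If it still fails, reading the process's stderr usually reveals why pdftotext exits with 1 or 99. Here is a sketch of such a diagnostic run (the paths are the ones from your example):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class PdfToTextDebug {
    public static void main(String[] args) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(
                "pdftotext",
                "/home/www-data/CANEFS_TEST/Hello/ciao.pdf",
                "/tmp/ciao.pdf.txt");
        pb.redirectErrorStream(true); // merge stderr into stdout so nothing is lost
        Process p = pb.start();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                System.out.println("pdftotext: " + line);
            }
        }
        System.out.println("exit value: " + p.waitFor());
    }
}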
I am in the process of creating a test plan in JMeter which visits a random number of pages (from 2 to 10), whose URLs are to be fetched from a CSV Data Set. I have created the CSV Data Set and the samplers, which are working fine, except that only one row is read from the Data Set per thread, which is not what I need - I want a new row to be read after the sampler has completed (or before, I'm not fussed).
I saw that this question is very similar and the solution was to use the Raw Data Source Pre-Processor, which does work but requires arduous alterations to the file in question (adding chunk sizes before each line), which is a bit of a pain when the file is about 500 lines long.
Is there a way I can set the CSV Data Set to advance to the next row on reading, or use some post- or pre-processor, such as BeanShell, to do this? I have seen people state that CSVRead can do this, but that function's access is per-thread, which would be no good for me.
As a side note - ultimately all I want to do is access a random line in the file and pass it to an HTTP sampler; if there is an easier or better way to do this, I'm open to suggestions.
For this you can use BeanShell (= Java) code executed from a BeanShell Sampler / BeanShell PostProcessor / BeanShell PreProcessor.
The following code will read all the lines from your file and then select a single random one:
import java.text.*;
import java.io.*;
import java.util.*;

// script parameters: "<csv file name>,<csv directory>"
String[] params = Parameters.split(",");
String csvTest = params[0];
String csvDir = params[1];
ArrayList strList = new ArrayList();

try {
    File file = new File(System.getProperty("user.dir") + File.separator + csvDir + File.separator + csvTest);
    if (!file.exists()) {
        throw new Exception("ERROR: file " + csvTest + " not found in " + csvDir + " directory.");
    }
    BufferedReader bufRdr = new BufferedReader(new FileReader(file));
    String line = null;
    while ((line = bufRdr.readLine()) != null) {
        strList.add(line);
    }
    bufRdr.close();
    // pick one random line and expose it as a JMeter variable
    Random rnd = new java.util.Random();
    vars.put("csvUrl", strList.get(rnd.nextInt(strList.size())));
}
catch (Exception ex) {
    IsSuccess = false;
    log.error(ex.getMessage());
    System.err.println(ex.getMessage());
}
catch (Throwable thex) {
    System.err.println(thex.getMessage());
}
Then you can access the extracted URL via a variable (${csvUrl} in this example).
My only doubt is whether reading the full file on each iteration (if you have to execute this in a loop) is a good solution from a performance point of view.
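To avoid re-reading the file on every iteration, one option (a sketch along the lines of the setUp Thread Group approach from the JSR223 answer above; the file name urls.csv and the property key are assumptions) is to cache the lines once in JMeter properties:

import java.io.*;
import java.util.ArrayList;

// BeanShell Sampler in a setUp Thread Group: read the file once and cache it
ArrayList urls = new ArrayList();
BufferedReader reader = new BufferedReader(new FileReader("urls.csv")); // assumed file name
String line;
while ((line = reader.readLine()) != null) {
    urls.add(line);
}
reader.close();
props.put("urls", urls); // JMeter properties are shared across all threads

Then each request only does a cheap in-memory lookup, for example in a BeanShell PreProcessor:

import java.util.ArrayList;
import java.util.Random;

// pick a random cached URL and expose it to the sampler
ArrayList urls = (ArrayList) props.get("urls");
vars.put("csvUrl", (String) urls.get(new Random().nextInt(urls.size())));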