EOFException in JMeter - java

If I'm using 2 users in a thread group, the first 2 rows of test data are read through the CSV Data Set Config in the 1st iteration, but no test data is read by JMeter in the following iterations during playback, and an EOFException is written to the JMeter log. Can anyone provide a solution for this?
JMeter log:
2014/12/16 03:05:23 WARN - jmeter.threads.JMeterThread: The delay timer was interrupted - probably did not wait as long as intended.
2014/12/16 03:05:23 ERROR - jmeter.protocol.http.sampler.HTTPJavaImpl: readResponse: java.io.EOFException
2014/12/16 03:05:23 INFO - jmeter.protocol.http.sampler.HTTPJavaImpl: Error Response Code: 200, Server sent no Errorpage
2014/12/16 03:05:23 ERROR - jmeter.protocol.http.sampler.HTTPJavaImpl: readResponse: java.io.EOFException
2014/12/16 03:05:23 INFO - jmeter.protocol.http.sampler.HTTPJavaImpl: Error Response Code: 200, Server sent no Errorpage

There are 2 possible reasons:
You're using the wrong path to the CSV file (the most frequent cause is using a relative CSV file path without being sure of JMeter's current working directory). Solution: use full paths where possible; a quick path check is sketched below.
You have "Recycle on EOF" set to "False" in your CSV Data Set Config, so once the file is exhausted there is no more data to read.
See the Using CSV DATA SET CONFIG guide for more details on where to place the CSV Data Set Config test element and how to configure it properly.
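If the relative-path cause is suspected, a few lines of plain Java run from the same directory JMeter is launched from show where such a path actually resolves (the file name below is only an illustrative assumption, not from the question):

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CsvPathCheck {
    public static void main(String[] args) {
        // Hypothetical relative path, as it might be typed into CSV Data Set Config
        Path csv = Paths.get("testdata/users.csv");
        // Where that path points when resolved against the current working directory
        System.out.println("Resolves to: " + csv.toAbsolutePath());
        System.out.println("Exists: " + Files.exists(csv));
    }
}

If "Exists" prints false, JMeter is most likely looking in a different directory than expected.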

Related

How to get kafka offsets and the time when these offsets/records were created

I am basically trying to query offset info using bin\kafka-run-class kafka.tools.GetOffsetShell
I am providing the broker list and topic info as arguments.
Ref - GetOffsetShell Utility
But I am facing - Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
Here are two SO posts that touch on this same topic -
Retrieve Timestamp based data from Kafka and
kafka-run-class throwing java.lang.OutOfMemoryError error.
So this question is not about kafka-run-class kafka.tools.GetOffsetShell. I want to find out what other tools are available to query Kafka offset info - information like the offset and the time when the offset was created.
kafka-consumer-groups.sh does not help my use case, as it does not provide the timestamp of when the record/offset was created.
Note: I suppose we are using Confluent Kafka 5.3.x with an SSL/JAAS auth mechanism.
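Not from the original thread, but one option worth checking is the plain Java consumer API, which can report end offsets and the earliest offset whose record timestamp is at or after a given time. A minimal sketch, assuming placeholder broker/topic/security settings:

import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.TopicPartition;

public class OffsetInfo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9093");   // placeholder; add the SSL/JAAS properties your cluster needs
        props.put("group.id", "offset-info-probe");      // hypothetical group, used only for lookups
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("my-topic", 0);   // placeholder topic and partition

            // Latest available offset for the partition (roughly what GetOffsetShell reports)
            Map<TopicPartition, Long> end = consumer.endOffsets(Collections.singleton(tp));
            System.out.println("End offset: " + end.get(tp));

            // Earliest offset whose record timestamp is >= the given time, plus that timestamp
            long oneHourAgo = System.currentTimeMillis() - 3_600_000L;
            Map<TopicPartition, OffsetAndTimestamp> byTime =
                    consumer.offsetsForTimes(Collections.singletonMap(tp, oneHourAgo));
            OffsetAndTimestamp oat = byTime.get(tp);
            if (oat != null) {
                System.out.println("Offset " + oat.offset() + " created at " + oat.timestamp());
            }
        }
    }
}

Note that offsetsForTimes relies on the record timestamps stored by the broker, so "when the offset was created" here means the record's timestamp (create time or log-append time, depending on the topic configuration).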

Apache Camel downloads some files incompletely from SFTP

I've been struggling to get to the bottom of why it is that some files are not correctly downloaded.
It seems like certain files just won't be downloaded fully, even when testing locally and restarting my application.
To make matters more difficult it is not always consistent.
Info:
Apache Camel version: 2.20.0
Integrated into Spring-Boot application using the camel-spring-boot-starter
Files are about 190M
Files download ok using standalone Jsch and Linux sftp client
Heap size set to 1G and memory usage doesn't even get close to the max
Camel doesn't detect anything wrong with the download, even if the number of bytes written is tens of megabytes less than the length of the file according to the Camel headers (the headers have the correct file length)
I've observed the issue with org.apache.camel logging set to TRACE without seeing anything strange in the logs.
Idempotent repo is updated as if the file was processed correctly
I see the same issue on Linux and Windows
Any advice on what the issue might be or suggestions for how to troubleshoot would be awesome!
Route config (a bit artificially created since values come from spring-boot config):
public class FileRouteBuilder extends RouteBuilder {
    // Cut
    @Override
    public void configure() throws Exception {
        errorHandler(deadLetterChannel("seda:" + ROUTE_ID_ERROR_EMAIL));
        from("sftp://username@hostname/OUT?noop=true&streamDownload=true&password=password&include=Data_file.*csv&idempotentRepository=#keyRepo&greedy=true&delay=5m&maxMessagesPerPoll=10&readLock=changed")
            .id(routeConfig.getRouteId())
            .routeDescription(routeConfig.getRouteId())
            .setHeader(HEADER_FILE_SOURCE, constant(routeConfig.getRouteId()))
            .to("log:feeds." + routeConfig.getRouteId() + "?level=INFO&showAll=true")
            // Exclude all files older than the specified number of hours
            .filter(new FileModifiedSincePredicate(24))
            .to("file:rootDir/DATA")
            .to("seda:" + ROUTE_ID_ACTIVITY_EMAIL_NOTIFICATION)
            .end();
    }
}
Update1
Observations after adding binary=true.
First two files are downloaded correctly but the 3rd and final file on the server is not.
193255587 Data_File_12.csv
191072548 Data_File_15.csv
139929360 Data_File_16.csv
The correct file size of the Data_File_16.csv file is 192867682 bytes, which is captured correctly in the CamelFileLength header.
Update 2
Removed all the log and seda email components above, and re-ran.
The third file still doesn't get completely written.
Adding the relevant DEBUG level log output in the hope that it sheds some light on what is going on or perhaps rules out certain things.
From what I can tell the log doesn't show anything suspicious, and there is no hint that the _16 file is incompletely written.
Is there anything that could be happening on the SFTP server, that anyone is aware of, which would be worth checking with the provider?
o.a.c.c.file.remote.SftpConsumer : Took 0.194 seconds to poll: OUT
o.a.c.c.file.remote.SftpConsumer : Total 3 files to consume
o.a.c.c.file.remote.SftpConsumer : About to process file: RemoteFile[Data_File_12.csv] using exchange: Exchange[]
o.apache.camel.processor.SendProcessor : >>>> file://target/file-dest/MISA Exchange[ID-LON-2016-1516204084378-0-1]
o.a.camel.component.file.FileOperations : Using InputStream to write file: target\file-dest\MISA\Data_File_12.csv
o.a.camel.converter.jaxp.XmlConverter : Created TransformerFactory: com.sun.org.apache.xalan.internal.xsltc.trax.TransformerFactoryImpl@d9dfe93
o.a.c.c.file.GenericFileProducer : Wrote [target\file-dest\MISA\Data_File_12.csv] to [file://target/file-dest/MISA]
o.a.c.c.file.GenericFileOnCompletion : Done processing file: RemoteFile[Data_File_12.csv] using exchange: Exchange[ID-LON-2016-1516204084378-0-1]
o.a.c.p.i.FileIdempotentRepository : Appending Data_File_12.csv-193255587 to idempotent filestore: target\file-dest\.file-key-repo\repo
o.a.c.c.file.remote.SftpConsumer : About to process file: RemoteFile[Data_File_15.csv] using exchange: Exchange[]
o.apache.camel.processor.SendProcessor : >>>> file://target/file-dest/MISA Exchange[ID-LON-2016-1516204084378-0-2]
o.a.camel.component.file.FileOperations : Using InputStream to write file: target\file-dest\MISA\Data_File_15.csv
o.a.c.c.file.GenericFileProducer : Wrote [target\file-dest\MISA\Data_File_15.csv] to [file://target/file-dest/MISA]
o.a.c.c.file.GenericFileOnCompletion : Done processing file: RemoteFile[Data_File_15.csv] using exchange: Exchange[ID-LON-2016-1516204084378-0-2]
o.a.c.p.i.FileIdempotentRepository : Appending Data_File_15.csv-191072548 to idempotent filestore: target\file-dest\.file-key-repo\repo
o.a.c.c.file.remote.SftpConsumer : About to process file: RemoteFile[Data_File_16.csv] using exchange: Exchange[]
o.apache.camel.processor.SendProcessor : >>>> file://target/file-dest/MISA Exchange[ID-LON-2016-1516204084378-0-3]
o.a.camel.component.file.FileOperations : Using InputStream to write file: target\file-dest\MISA\Data_File_16.csv
o.a.c.c.file.GenericFileProducer : Wrote [target\file-dest\MISA\Data_File_16.csv] to [file://target/file-dest/MISA]
o.a.c.c.file.GenericFileOnCompletion : Done processing file: RemoteFile[Data_File_16.csv] using exchange: Exchange[ID-LON-2016-1516204084378-0-3]
o.a.c.p.i.FileIdempotentRepository : Appending Data_File_16.csv-192867682 to idempotent filestore: target\file-dest\.file-key-repo\repo
Ah, you log the message after you download it, and you use streamDownload=true.
See the FAQ why-is-my-message-body-empty for how you need to use stream caching if doing so.
Because the message is stream-based, either do NOT log the message body (you can log headers etc.) and route it to the file endpoint so it is saved directly as a file, or enable stream caching.
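A minimal sketch of the first option (endpoint details are placeholders taken from the route above, not a verified fix): log headers only, so the one-shot stream is still intact when the file endpoint writes it.

from("sftp://username@hostname/OUT?streamDownload=true&password=password&include=Data_file.*csv")
    // Log headers only; showBody=false leaves the streamed body untouched
    .to("log:feeds?level=INFO&showHeaders=true&showBody=false")
    // The file endpoint is now the single consumer of the stream and writes it out in full
    .to("file:rootDir/DATA");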

Apache Camel: Cached stream file deletion causing file not found errors

Scenario:
I am trying to stream and process some large XML files. These files are sent from a producer asynchronously.
producerTemplate.sendBodyAndHeaders(endpointUri, inStream, ImmutableMap.of(JOBID_PROPERTY, importJob.getId()));
I need to batch all file input streams, identify the files by probing them with XPath, and reorder them according to their content. I have the following route:
from("direct:route1")
.streamCaching()
.choice()
.when(xpath("//Tag1")) .setHeader("execOrder", constant(3)) .setHeader("xmlRoute", constant( "direct:some-route"))
.when(xpath("//Tag2")) .setHeader("execOrder", constant(1)) .setHeader("xmlRoute", constant( "direct:some-other-route"))
.when(xpath("//Tag3")) .setHeader("execOrder", constant(2)) .setHeader("xmlRoute", constant( "direct:yet-another-route"))
.otherwise()
.to("direct:somewhereelse")
.end()
.resequence(header("execOrder"))
.batch(new BatchResequencerConfig(300, 10000L))
.allowDuplicates()
.recipientList(header("xmlRoute"))
When running my code I get the following error:
2017-11-23 11:43:13.442 INFO 10267 --- [ - Batch Sender] c.w.n.s.m.DefaultImportJobService : Updating entity ImportJob with id 5a16a61803af33281b22c716
2017-11-23 11:43:13.451 WARN 10267 --- [ - Batch Sender] org.apache.camel.processor.Resequencer : Error processing aggregated exchange: Exchange[ID-int-0-142-bcd-wsint-pro-59594-1511433568520-0-20]. Caused by: [org.apache.camel.RuntimeCamelException - Cannot reset stream from file /var/folders/dc/fkrgdrnx6txbg7jfdjd_58mm0000gn/T/camel/camel-tmp-39abaae8-9bdd-435a-b63d-299ad8b06415/cos1499080503439465502.tmp]
org.apache.camel.RuntimeCamelException: Cannot reset stream from file /var/folders/dc/fkrgdrnx6txbg7jfdjd_58mm0000gn/T/camel/camel-tmp-39abaae8-9bdd-435a-b63d-299ad8b06415/cos1499080503439465502.tmp
at org.apache.camel.converter.stream.FileInputStreamCache.reset(FileInputStreamCache.java:91)
I've read here that the FileInputStreamCache is closed when XPathBuilder.getDocument() is called and the temp file is deleted, so you get the FileNotFoundException when the XPathBuilder wants to reset the InputStream.
The solution seems to be to disable the spooling to disk like this:
camelContext.getStreamCachingStrategy().setSpoolThreshold(-1);
However, I don't want to do that because of RAM restrictions, i.e. files can get up to 600MB and I don't want to keep them in memory. Any ideas how to solve the problem?
The resequencer is a two-leg pattern (stateful) and will cause the original exchange to be completed beforehand, as it keeps a copy in memory while re-sequencing until the gap is filled, and then sends the messages out in the new order.
Since your input stream comes from some HTTP service, it would be closed before the resequencer may output the exchange.
Either do as suggested to store to local disk first, and then let the resequencer work on that, or find a way not to use the resequencer.
I ended up doing what Claus and Ricardo suggested. I made a separate route which saves the files to disk. Then another one which probes the files and resequences the exchanges according to a fixed order.
String xmlUploadDirectory = "file://" + Files.createTempDir().getPath() + "/xmls?noop=true";

from("direct:route1")
    .to(xmlUploadDirectory);

from(xmlUploadDirectory)
    .choice()
        .when(xpath("//Tag1")).setHeader("execOrder", constant(3)).setHeader("xmlRoute", constant("direct:some-route"))
        .when(xpath("//Tag2")).setHeader("execOrder", constant(1)).setHeader("xmlRoute", constant("direct:some-other-route"))
        .when(xpath("//Tag3")).setHeader("execOrder", constant(2)).setHeader("xmlRoute", constant("direct:yet-another-route"))
        .otherwise()
            .to("direct:somewhereelse")
    .end()
    .to("direct:resequencing");

from("direct:resequencing")
    .resequence(header("execOrder"))
        .batch(new BatchResequencerConfig(300, 10000L))
        .allowDuplicates()
    .recipientList(header("xmlRoute"));

Requested file does not exist Mule SFTP

I am polling from SFTP in MuleSoft every second; fileAge is set to 0, the connection pool size is 1, and autoDelete is enabled. I then save the file to a directory via a File connector (the outbound endpoint), which polls every 2 seconds with a file age of 500. The next flow starts with this same directory as a File inbound endpoint and processes the file; here the polling is set to every 3 seconds and autoDelete is enabled. I get this error, but the file is processed:
java.io.IOException: The requested file does not exist (//file/7ggot1517.txt)
at org.mule.transport.sftp.SftpClient.getSize(SftpClient.java:499)
at org.mule.transport.sftp.SftpClient.retrieveFile(SftpClient.java:378)
...
Does anyone know how to configure the SFTP and File connectors to:
1. Read the file from SFTP and delete it from SFTP?
2. Process the file from the local directory and delete it?
3. Get rid of that error?
Thank you
Can you try the configuration below? I tried reading a file from FTP to a local directory.
Replace FTP with SFTP.
Use the small Groovy script provided in it. This should work; I just tested it and it works as expected. Deleting can be done with the autoDelete attribute or fileAge. Please let me know if this helps.
<flow name="ftptestFlow">
<ftp:inbound-endpoint host="hostname" port="port" path="path/filename" user="userid" password="password" responseTimeout="10000" doc:name="FTP"/>
<set-variable variableName="fileName" value="fileName" doc:name="fileName"/>
<scripting:component doc:name="getFile">
<scripting:script engine="Groovy"><![CDATA[new File(flowVars.fileName).getText('UTF-8')]]></scripting:script>
</scripting:component>
<file:outbound-endpoint path="path" outputPattern="filename" responseTimeout="10000" doc:name="File"/>
</flow>
Your SFTP inbound endpoint probably polls the file a first time, but a second poll is started before the first one has had a chance to delete the file. Something like this happens:
First poll - a file is found, let's read it => OK
First poll - read the file and process it => OK
Second poll - a file is found, let's read it => OK
First poll - processing finished, delete the file => OK
Second poll - read the file and process it => Error: file has been deleted
As you can see, the second poll detects the presence of the file before the first poll actually deletes it, but by the time it tries to read it, the first poll has already deleted the file.
You can use the tempDir attribute on your SFTP inbound endpoint: it moves the file to a sub-directory of the folder it is read from before processing, ensuring subsequent polls are not triggered for the same file again. It then does something like:
First poll - a file is found, move it to tempDir and let's read it => OK
First poll - read the file and process it => OK
Second poll - No file found (it has been moved!) => OK
First poll - processing finished, delete the file => OK
Such as:
<sftp:inbound-endpoint connector-ref="SFTP"
tempDir="${ftp.path}/tmpPoll"
host="${ftp.host}"
port="${ftp.port}"
path="${ftp.path}"
user="${ftp.user}"
password="${ftp.password}" doc:name="SFTP" responseTimeout="10000"/>
You also need to make sure the SFTP user can read/write the sub-directory, or create it if necessary. Everything is documented here.
EDIT: and to delete your file from the local machine, you can simply use a Java or Groovy component once it has been properly processed:
try {
    // filePath is the java.nio.file.Path of the local copy that has been processed
    Files.delete(filePath);
} catch (IOException e) {
    // decide whether to log and continue or fail the flow
}

I want to get the file in response stream after long period of time

I have an internal tool which shows results by pulling 100k records from the DB.
On one occasion I have to make a web service call for each record (I can't store it in the DB as there is a security issue).
Please find my design below:
Step 1:
User clicks the button (get file).
Step 2:
I call the DB and the web service using a thread pool
and write to the CSV file.
It takes 5 minutes to process as I have 100k web service calls.
Step 3:
Once step 2 is done, I read the CSV file and write it into a ByteArrayOutputStream,
set the response content type to text/csv,
and write it into the response, which downloads the CSV file onto the user's system.
Issue:
I am getting a proxy server issue as it takes more than 5 minutes;
I get an HTTP session timeout or a 502 invalid response.
Any suggestions on how to deliver the file to the user?
Please help me resolve this issue if you can.
P.S. As it is an internal tool, users are ready to wait more than five minutes to get the file.
I can't change step 1 and step 2.
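One pattern that is often suggested for this kind of constraint (a hedged sketch, not from the original thread; all class and method names below are hypothetical): keep steps 1 and 2 as they are but run them in a background job, so no single HTTP request has to stay open for the full 5 minutes. The button click only starts the job and returns a job id; the browser polls a status endpoint and downloads the CSV once it is ready.

import java.nio.file.Path;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CsvExportJobs {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final ConcurrentMap<String, Future<Path>> jobs = new ConcurrentHashMap<>();

    // Called from the controller behind the "get file" button: runs steps 1 and 2 in the background.
    public String start(Runnable dbAndWebServiceCalls, Path csvFile) {
        String jobId = UUID.randomUUID().toString();
        jobs.put(jobId, pool.submit(() -> {
            dbAndWebServiceCalls.run();   // existing step 2 logic, unchanged
            return csvFile;               // location of the CSV written by step 2
        }));
        return jobId;                     // the UI polls with this id
    }

    // Called from a status/download endpoint: null means "still running", otherwise the finished CSV.
    public Path pollResult(String jobId) throws Exception {
        Future<Path> f = jobs.get(jobId);
        return (f != null && f.isDone()) ? f.get() : null;
    }
}

The download request then only has to stream the finished file (text/csv), which takes seconds, so neither the proxy nor the HTTP session has to survive the 5-minute generation step.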
