How To Pass Variable To Pepper-Box Plain Text Config - java

I'm setting up my Pepper-Box Plain Text Config to pass a variable using ${accountNumber}, ${{accountNumber}}, {{accountNumber}}, and a function that returns a string, but none of them worked.
This is my message to Kafka:
{
  "eventName": "OFFER",
  "payload": {
    "accountNumber": "${accountNumber}",
    "Limit": 20000000
  }
}
but the variable isn't substituted, even though the Debug Sampler shows that accountNumber is set. I think there is a mistake in how I reference the variable, but the other techniques I tried didn't work either.
The error message when I try ${{accountNumber}} is:
symbol: method accountNumber()
location: class MessageIterator1566802574812
1 error
Uncaught Exception java.lang.ClassFormatError: Truncated class file. See log file for details.

It looks like a limitation of the plugin: you're basically limited to Schema Template Functions.
Alternatively, you can send a record to Kafka using a JSR223 Sampler and the following Groovy code:
import org.apache.jmeter.threads.JMeterVariables
import org.apache.kafka.clients.producer.KafkaProducer
import org.apache.kafka.clients.producer.ProducerRecord
def kafkaProps = new Properties()
kafkaProps.put(org.apache.kafka.clients.producer.ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
kafkaProps.put(org.apache.kafka.clients.producer.ProducerConfig.CLIENT_ID_CONFIG, "KafkaExampleProducer")
kafkaProps.put(org.apache.kafka.clients.producer.ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.LongSerializer.class.getName())
kafkaProps.put(org.apache.kafka.clients.producer.ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringSerializer.class.getName())
def producer = new KafkaProducer<>(kafkaProps)
// In a real JSR223 Sampler the `vars` variable is already bound to the current
// thread's JMeterVariables; it is created here explicitly only to keep the
// example self-contained.
JMeterVariables vars = new JMeterVariables()
vars.put("accountNumber", "foo")
def record = new ProducerRecord<>("test", "{\n" +
" \"eventName\": \"OFFER\",\n" +
" \"payload\": {\n" +
" \"accountNumber\": \"" + vars.get("accountNumber") + "\",\n" +
" \"Limit\": 20000000\n" +
" }\n" +
"}")
producer.send(record)
producer.close()
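Note that for the Groovy code above to run, the kafka-clients jar (matching your broker version) has to be present on JMeter's classpath, e.g. dropped into JMeter's lib folder, followed by a JMeter restart.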
More information: Apache Kafka - How to Load Test with JMeter

MarkLogic Wildcard Search - QConsole vs. Java API

I'm seeing different results from a Java-based query and what I believe is the equivalent cts:search in Query Console. There's a lot of information here, and I've tried to organize it appropriately. Here are the steps to set up a simple example that replicates what I'm seeing.
Create new database with default settings
Add new forest with default settings
Enable three character searches (the only non-default database setting)
Insert the three JSON documents below into the database
Query console returns doc2. Java client returns doc2 AND doc1. Why? I would expect the same results from each. I want to get the results in Java that the query console is returning. Am I writing the query definition in Java incorrectly?
It looks like the Java client wildcard search is searching the entire document, even though I've specified that I only want to do a wildcard search inside the given JSON property (name).
Is there a way to see or log the resultant server-side “cts query” given a client-side RawCombinedQueryDefinition? I'd like to see what the Java request gets translated into on the server side.
doc1.json
{
  "state": "OH",
  "city": "Dayton",
  "notes": "not Cincinnati"
}
doc2.json
{
  "state": "OH",
  "city": "Cincinnati",
  "notes": "real city"
}
doc3.json
{
  "state": "OH",
  "city": "Daytona",
  "notes": "this is a made up city"
}
Query console code used to insert documents
xquery version "1.0-ml";
xdmp:document-load("/some/path/doc1.json",
  <options xmlns="xdmp:document-load">
    <uri>/doc1.json</uri>
  </options>
);
Query console code used to search
xquery version "1.0-ml";
cts:search(fn:collection(),
  cts:and-query((
    cts:json-property-value-query("state", "OH"),
    cts:json-property-value-query("city", "*Cincinnati*")
  ))
)
Java QueryManager query in easy-to-read form
{
  "search": {
    "query": {
      "queries": [
        {
          "value-query": {
            "type": "string",
            "json-property": "state",
            "text": "OH"
          }
        },
        {
          "value-query": {
            "type": "string",
            "json-property": "city",
            "text": "*Cincinnati*"
          }
        }
      ]
    }
  }
}
Java code
import com.marklogic.client.DatabaseClient;
import com.marklogic.client.DatabaseClientFactory;
import com.marklogic.client.document.DocumentPage;
import com.marklogic.client.document.DocumentRecord;
import com.marklogic.client.document.JSONDocumentManager;
import com.marklogic.client.io.Format;
import com.marklogic.client.io.StringHandle;
import com.marklogic.client.query.QueryManager;
import com.marklogic.client.query.RawCombinedQueryDefinition;
import org.junit.Test;
public class MarkLogicTest
{
    @Test
    public void testWildcardSearch()
    {
        DatabaseClientFactory.SecurityContext securityContext = new DatabaseClientFactory.DigestAuthContext("admin", "admin");
        DatabaseClient client = DatabaseClientFactory.newClient("localhost", 8000, "test", securityContext);
        QueryManager queryManager = client.newQueryManager();
        JSONDocumentManager documentManager = client.newJSONDocumentManager();
        String query = "{\n" +
            "  \"search\": {\n" +
            "    \"query\": {\n" +
            "      \"queries\": [\n" +
            "        {\n" +
            "          \"value-query\": {\n" +
            "            \"type\": \"string\",\n" +
            "            \"json-property\": \"state\",\n" +
            "            \"text\": \"OH\"\n" +
            "          }\n" +
            "        },\n" +
            "        {\n" +
            "          \"value-query\": {\n" +
            "            \"type\": \"string\",\n" +
            "            \"json-property\": \"city\",\n" +
            "            \"text\": \"*Cincinnati*\"\n" +
            "          }\n" +
            "        }\n" +
            "      ]\n" +
            "    }\n" +
            "  }\n" +
            "}";
        StringHandle queryHandle = new StringHandle(query).withFormat(Format.JSON);
        RawCombinedQueryDefinition queryDef = queryManager.newRawCombinedQueryDefinition(queryHandle);
        DocumentPage documents = documentManager.search(queryDef, 1);
        while (documents.hasNext())
        {
            DocumentRecord document = documents.next();
            StringHandle resultHandle = document.getContent(new StringHandle());
            String result = resultHandle.get();
            System.out.println(result);
        }
    }
}
System.out.println() results
{"state":"OH", "city":"Dayton", "notes":"not Cincinnati"}
{"state":"OH", "city":"Cincinnati", "notes":"real city"}
Why does the Java client return the first result where city = Dayton?
Thanks in advance!
The REST API and thus the Java API executes an unfiltered search by default (meaning, the matches are based entirely on the indexes). By contrast, cts:search() executes a filtered search by default (meaning, the result documents are inspected to throw out false positives).
If you add the "unfiltered" option to cts:search(), it also returns both documents.
The quick fix is to add the "filtered" option to the Java API search, but the better fix for performance at scale is to refine the indexes to support exact matching for the required wildcard queries.
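For example, the combined query above can carry the option inline; a sketch, assuming the usual JSON rendering of the options node in a combined query:
{
  "search": {
    "query": { ... },
    "options": {
      "search-option": [ "filtered" ]
    }
  }
}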
Elements are correlated with wildcards based on position.
Thus, for this query, I believe you need to turn on the index configurations for element word positions and for three character word positions.
Hoping that helps,
From a quick look at the above code, you do not have an and-query in your Java example, so it reads as an or-query of Ohio OR Cincinnati.
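If you want the conjunction to be explicit either way, the structured query JSON has an and-query wrapper; a sketch (the combined-query envelope around it is unchanged):
{
  "query": {
    "and-query": {
      "queries": [
        { "value-query": { "type": "string", "json-property": "state", "text": "OH" } },
        { "value-query": { "type": "string", "json-property": "city", "text": "*Cincinnati*" } }
      ]
    }
  }
}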

Base64 JAVA encode with dynamic values in SCALA - GATLING

I'm using Gatling and trying to use Java's Base64 library in Scala to send an encoded user:password in the request's "Authorization" header, with dynamic values.
I'm trying to do the following:
val register = {
exec(request.asJSON
.check(status.is(200))
.check(jsonPath("$..user").saveAs("user"))
.check(jsonPath("$..password").saveAs("password"))
).pause(1)
}
val myvalue: HttpRequestBuilder = Utils.createPostFormParamsRequest(
"myvalue",
login,
Map("value"-> ("Basic " + Base64.getEncoder.encodeToString((("${user}").getBytes() + ":" + ("${password}").getBytes()).getBytes("utf-8")))),
Map())
I also tried Base64.getEncoder.encodeToString(("${user}" + ":" + "${password}").getBytes("utf-8"))
But it seems that Base64 encodes the literal string "${user}" rather than the actual value, so the encoding does not work properly.
I also tried:
val helper = {
exec { session =>
val user : String= (session("user").as[String])
val password : String= (session("password").as[String])
val temp = "Basic " + Base64.getEncoder.encodeToString((user + ":" + password).getBytes("utf-8"))
val temp2: HttpRequestBuilder = Utils.createPostFormParamsRequest(
"bla",
login,
Map("value"-> temp),
Map())
val assert = {
exec(helper.asJSON
.check(status.is(200))
.check(header("answer").saveAs("answer"))
).pause(1)
}
session
}
And here the encoding works properly, but the "exec" does not.
Is there a way to save the values at run time outside of the exec?
I don't know Gatling that well, but I think this should work. It's not the prettiest, but without seeing the full code and how it's used, it's a bit difficult to come up with something that looks good:
var token: String = null
val registerAssert = exec(...)
def finalToken = {
Utils.createPostFormParamsRequest(
"Final token",
Constants.LOGIN,
Map("Authorization"-> token),
Map())
}
def saveToken(s: Session) = {
token = "Basic " + Base64.getEncoder.encodeToString((s("uuid").as[String].getBytes() + ":" + s("secret").as[String].getBytes()).getBytes("utf-8")
s
}
// now you're actually executing the above
scenario(...)
.exec(registerAssert)
.exec(saveToken(_))
.exec(finalToken) // I'm assuming finalToken is executable
The intention of this is to first save the token value in a class variable, and then only construct the finalToken request (which uses that token) afterwards. Hence the def, and when it's called the token value will have been set.
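For reference, the encoding has to concatenate the two strings first and only then take the bytes; calling getBytes() on each part separately (as in the original snippet) concatenates array toString() values instead. In plain Java terms, with hypothetical credentials:
import java.nio.charset.StandardCharsets;
import java.util.Base64;

String user = "alice";       // hypothetical value
String password = "s3cret";  // hypothetical value
String header = "Basic " + Base64.getEncoder()
        .encodeToString((user + ":" + password).getBytes(StandardCharsets.UTF_8));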

How to create a WSResponse object from string for Play WSClient

Documentation suggests testing an API client based on WSClient against a mock web service, that is, creating a play.server.Server that responds to real HTTP requests.
I would prefer to create WSResponse objects directly from files, complete with status line, header lines, and body, without real TCP connections. That would require fewer dependencies and run faster. There may also be other cases where this is useful.
But I can't find a simple way to do it. It seems all implementations wrapped by WSResponse are tied to reading from network.
Should I just create my own subclass of WSResponse for this, or maybe I'm wrong and it already exists?
The API for Play seems intentionally obtuse. You have to use their "Cacheable" classes, which are the only ones that seem directly instantiable from objects you'd have lying around.
This should get you started:
import play.api.libs.ws.ahc.AhcWSResponse;
import play.api.libs.ws.ahc.cache.CacheableHttpResponseBodyPart;
import play.api.libs.ws.ahc.cache.CacheableHttpResponseHeaders;
import play.api.libs.ws.ahc.cache.CacheableHttpResponseStatus;
import play.shaded.ahc.io.netty.handler.codec.http.DefaultHttpHeaders;
import play.shaded.ahc.org.asynchttpclient.Response;
import play.shaded.ahc.org.asynchttpclient.uri.Uri;
AhcWSResponse response = new AhcWSResponse(new Response.ResponseBuilder()
.accumulate(new CacheableHttpResponseStatus(Uri.create("uri"), 200, "status text", "protocols!"))
.accumulate(new CacheableHttpResponseHeaders(false, new DefaultHttpHeaders().add("My-Header", "value")))
.accumulate(new CacheableHttpResponseBodyPart("my body".getBytes(), true))
.build());
The mystery boolean values aren't documented. My guess is the boolean for BodyPart is whether that is the last part of the body. My guess for Headers is whether the headers are in the trailer of a message.
I used another way, mocking WSResponse with Mockito:
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import play.libs.ws.WSRequest;
import play.libs.ws.WSResponse;
import org.mockito.Mockito;
import java.io.IOException;
...
final WSResponse wsResponseMock = Mockito.mock(WSResponse.class);
Mockito.doReturn(200).when(wsResponseMock).getStatus();
final String jsonStr = "{\n"
+ " \"response\": {\n"
+ " \"route\": [\n"
+ " { \"summary\" :\n"
+ " {\n"
+ " \"distance\": 23\n"
+ " }\n"
+ " }\n"
+ " ]\n"
+ " }\n"
+ "}";
ObjectMapper mapper = new ObjectMapper();
JsonNode jsonNode = null;
try {
jsonNode = mapper.readTree(jsonStr);
} catch (IOException e) {
e.printStackTrace();
}
Mockito.doReturn(
jsonNode)
.when(wsResponseMock)
.asJson();
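If the code under test takes a WSClient rather than a WSResponse, the surrounding request can be stubbed the same way; a minimal sketch, assuming the Java WS API where WSRequest.get() returns a CompletionStage<WSResponse>:
import play.libs.ws.WSClient;
import java.util.concurrent.CompletableFuture;

final WSClient wsClientMock = Mockito.mock(WSClient.class);
final WSRequest wsRequestMock = Mockito.mock(WSRequest.class);
Mockito.doReturn(wsRequestMock).when(wsClientMock).url(Mockito.anyString());
Mockito.doReturn(CompletableFuture.completedFuture(wsResponseMock))
        .when(wsRequestMock).get();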
If you are using Play Framework 2.8.x and Scala, the code below can help generate a dummy WSResponse:
import org.scalatest.{FlatSpec, Matchers}
import play.api.libs.ws.ahc.AhcWSResponse
import play.api.libs.ws.ahc.cache.CacheableHttpResponseStatus
import play.shaded.ahc.org.asynchttpclient.Response
import play.shaded.ahc.org.asynchttpclient.uri.Uri
import play.api.libs.ws.ahc.cache.CacheableHttpResponseBodyPart
import play.shaded.ahc.io.netty.handler.codec.http.DefaultHttpHeaders
class OutputWriterSpec extends FlatSpec with Matchers {
  val respBuilder = new Response.ResponseBuilder()
  respBuilder.accumulate(new CacheableHttpResponseStatus(Uri.create("http://localhost:9000/api/service"), 202, "status text", "json"))
  respBuilder.accumulate(new DefaultHttpHeaders().add("Content-Type", "application/json"))
  respBuilder.accumulate(new CacheableHttpResponseBodyPart("{\n\"id\":\"job-1\",\n\"lines\": [\n\"62812ce276aa9819a2e272f94124d5a1\",\n\"13ea8b769685089ba2bed4a665a61fde\"\n]\n}".getBytes(), true))
  val resp = new AhcWSResponse(respBuilder.build())
  val outputWriter = OutputWriter
  val expected = ("\"job-1\"", List("\"62812ce276aa9819a2e272f94124d5a1\"", "\"13ea8b769685089ba2bed4a665a61fde\""), "_SUCCESS")

  "Output Writer" should "handle response from api call" in {
    val actual = outputWriter.handleResponse(resp, "job-1")
    println("the actual : " + actual)
    actual shouldEqual expected
  }
}

Avro schema resolution for evolving a field from a primitive to a union

I'm working with Avro 1.7.0, using its Java generic representation API, and I have a problem dealing with our current case of schema evolution. The scenario we're dealing with here is making a primitive-type field optional by changing the field to be a union of null and that primitive type.
I'm going to use a simple example. Basically, our schemas are:
Initial: A record with one field of type int
Second version: Same record, same field name but the type is now a union of null and int
According to the schema resolution chapter of Avro's spec, the resolution for such a case should be:
if reader's is a union, but writer's is not
The first schema in the reader's union that matches the writer's schema is recursively resolved against it. If none match, an error is signalled.
My interpretation is that data serialized with the initial schema should resolve properly, since int is part of the union in the reader's schema.
However, when running a test of reading back a record serialized with version 1 using the version 2, I get
org.apache.avro.AvroTypeException: Attempt to process a int when a union was expected.
Here's a test that shows exactly this:
@Test
public void testReadingUnionFromValueWrittenAsPrimitive() throws Exception {
    Schema writerSchema = new Schema.Parser().parse("{\n" +
        "  \"type\":\"record\",\n" +
        "  \"name\":\"NeighborComparisons\",\n" +
        "  \"fields\": [\n" +
        "    {\"name\": \"test\",\n" +
        "     \"type\": \"int\" }]}");
    Schema readersSchema = new Schema.Parser().parse("{\n" +
        "  \"type\":\"record\",\n" +
        "  \"name\":\"NeighborComparisons\",\n" +
        "  \"fields\": [ {\n" +
        "    \"name\": \"test\",\n" +
        "    \"type\": [\"null\", \"int\"],\n" +
        "    \"default\": null } ] }");
    // Writing a record using the initial schema with the
    // test field defined as an int
    GenericData.Record record = new GenericData.Record(writerSchema);
    record.put("test", Integer.valueOf(10));
    ByteArrayOutputStream output = new ByteArrayOutputStream();
    JsonEncoder jsonEncoder = EncoderFactory.get().jsonEncoder(writerSchema, output);
    GenericDatumWriter<GenericData.Record> writer =
        new GenericDatumWriter<GenericData.Record>(writerSchema);
    writer.write(record, jsonEncoder);
    jsonEncoder.flush();
    output.flush();
    System.out.println(output.toString());
    // We try reading it back using the second schema version,
    // where the test field is defined as a union of null and int
    JsonDecoder jsonDecoder = DecoderFactory.get().jsonDecoder(readersSchema, output.toString());
    GenericDatumReader<GenericData.Record> reader =
        new GenericDatumReader<GenericData.Record>(writerSchema, readersSchema);
    GenericData.Record read = reader.read(null, jsonDecoder);
    // We should be able to assert that the value is 10, but it
    // fails on reading the record before getting here
    assertEquals(10, read.get("test"));
}
I would like to know either whether my expectations are correct (this should resolve successfully, right?) or where I'm not using Avro properly to handle such a scenario.
The expectation that a primitive schema can be migrated to a union of null and that primitive is correct.
The problem with the code above is in how the decoder is created: the decoder needs the writer's schema rather than the reader's schema, because it parses the serialized input, whose shape is defined by the writer's schema. The resolution against the reader's schema is handled by the GenericDatumReader, which already receives both schemas.
Rather than doing this:
JsonDecoder jsonDecoder = DecoderFactory.get().
jsonDecoder(readersSchema, output.toString());
It should be like this:
JsonDecoder jsonDecoder = DecoderFactory.get().
jsonDecoder(writerSchema, output.toString());
Credit goes to Doug Cutting for the answer on Avro's user mailing list:
http://mail-archives.apache.org/mod_mbox/avro-user/201208.mbox/browser

Spring autowiring failing to resolve JRuby script dependency

I'm trying to invoke a Ruby method from Java using the example code from:
https://github.com/tc/call-jruby-from-java-example
Here is what the Java code looks like with the embedded Ruby script:
@Service
public class ActProcessorImpl extends RubyObject implements IProcessor {
    private static final Ruby __ruby__ = Ruby.getGlobalRuntime();
    private static final RubyClass __metaclass__;
    static {
        String source = new StringBuilder(
            "require 'java'\n" +
            "require 'resque'\n" +
            "\n" +
            "class SaveData\n" +
            "  @queue = :general\n" +
            "end\n" +
            "\n" +
            "class JRubyResqueImpl\n" +
            "  include Java::IProcessor\n" +
            "\n" +
            "  java_signature 'void enqueue( Object )'\n" +
            "  def enqueue( data )\n" +
            "    Resque.enqueue( SaveData, data )\n" +
            "  end\n" +
            "end\n" +
            "").toString();
        __ruby__.executeScript(source, "JRubyResqueImpl.rb");
        RubyClass metaclass = __ruby__.getClass("JRubyResqueImpl");
        metaclass.setRubyStaticAllocator(ActProcessorImpl.class);
        __metaclass__ = metaclass;
    }
    public ActProcessorImpl(Ruby runtime, RubyClass metaClass)
    {
        super(runtime, metaClass);
    }
    public static IRubyObject __allocate__(Ruby ruby, RubyClass metaClass)
    {
        return new ActProcessorImpl(ruby, metaClass);
    }
    public ActProcessorImpl()
    {
        this(__ruby__, __metaclass__);
    }
    @Override
    public void enqueue(Object obj)
    {
        ObjectMapper mapper = new ObjectMapper();
        OutputStream os = new ByteArrayOutputStream();
        try {
            mapper.writeValue(os, obj);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
        String json = os.toString();
        IRubyObject rbJson = JavaUtil.convertJavaToRuby(__ruby__, json);
        RuntimeHelpers.invoke(__ruby__.getCurrentContext(), this, "enqueue", rbJson);
    }
}
When the Spring Framework IoC module is doing the autowiring, it tries to instantiate this class, which fails with the following error message:
org.jruby.exceptions.RaiseException: (LoadError) no such file to load -- resque
I don't see any errors when I take the embedded Ruby script and run it via the CLI using the command:
jruby -S JRubyResqueImpl.rb
Where the content of JRubyResqueImpl.rb is:
require 'java'
require 'resque'

class SaveData
  @queue = :general
end

class JRubyResqueImpl
  include Java::IProcessor

  java_signature 'void enqueue( Object )'
  def enqueue( data )
    Resque.enqueue( SaveData, data )
  end
end
I've configured the environment variables GEM_HOME, GEM_PATH and set JRUBY_OPTS=--1.9.
Using Oracle Java 1.6.0_25, JRuby 1.6.4 and Resque 1.19.0 running under Ubuntu 11.10.
Thanks in advance.
I was able to make some progress by explicitly loading the dependencies in the embedded Ruby script, like so:
//java code
String source = new StringBuilder(
"require 'java'\n" +
"load '/usr/local/jruby/jruby-1.6.4/lib/ruby/1.9/singleton.rb'\n" +
"load '/usr/local/jruby/jruby-1.6.4/lib/ruby/gems/gems/monitor-0.1.3/lib/monitor/controller.rb'\n" +
"load '/usr/local/jruby/jruby-1.6.4/lib/ruby/gems/gems/monitor-0.1.3/lib/monitor.rb'\n" +
"load'/usr/local/jruby/jruby-1.6.4/lib/ruby/gems/redis-2.2.2/lib/redis.rb'\n" +
"load '/usr/local/jruby/jruby-1.6.4/lib/ruby/gems/redis-namespace-1.0.3/lib/redis-namespace.rb'\n" +
"load '/usr/local/jruby/jruby-1.6.4/lib/ruby/gems/resque-1.19.0/lib/resque.rb'\n" +
"\n" +
etc...
But now I see the following error from Spring IoC:
org.jruby.exceptions.RaiseException: (LoadError) no such file to load -- singleton
Still stuck...
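One thing worth checking: a runtime obtained via Ruby.getGlobalRuntime() does not necessarily inherit GEM_HOME/GEM_PATH the way the jruby CLI does, so the gems may simply not be on the embedded runtime's load path. A sketch of configuring the load path explicitly when creating the runtime (the gem paths are examples based on the question, and this assumes your JRuby version exposes RubyInstanceConfig.setLoadPaths):
import org.jruby.Ruby;
import org.jruby.RubyInstanceConfig;
import java.util.Arrays;

RubyInstanceConfig config = new RubyInstanceConfig();
// Example gem locations; adjust to the local installation.
config.setLoadPaths(Arrays.asList(
    "/usr/local/jruby/jruby-1.6.4/lib/ruby/gems/gems/resque-1.19.0/lib",
    "/usr/local/jruby/jruby-1.6.4/lib/ruby/gems/gems/redis-2.2.2/lib",
    "/usr/local/jruby/jruby-1.6.4/lib/ruby/gems/gems/redis-namespace-1.0.3/lib"));
Ruby ruby = Ruby.newInstance(config);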
