Incrementing test data (mobile number) for load testing 1 million registrations - java

I am trying to load test a Register-Search application, which will do as the name suggests, for ~5 million mobile numbers. I will be using 100-500 threads, looping with a specific delay between each iteration.
I have the functional test JMeter script ready for the same. The only change I want to do is generate the mobile number automatically.
The easiest solution would be having the mobileNumber as ${__Random(${min},${max})}. But I want to avoid that and take a more linear approach by using a mobileNumber property.
In a JSR223 Sampler (using Groovy script), I was trying to read the property as
long number = ${__P(mobileNumber)}
vars.put("mobileNumber", String.valueOf(number))
I wish to use the UDV mobileNumber thus created in the current thread and increment the mobileNumber property by 100. I am trying to do:
number = number + 100
${__setProperty(mobileNumber, String.valueOf(number))
For some reason it is not working and gives the following response message:
javax.script.ScriptException: javax.script.ScriptException: groovy.lang.MissingPropertyException: No such property: number for class: Script1
Can't figure out what's wrong.

You can do it without any scripting, using just JMeter Functions:
${__longSum(${__P(mobileNumber)},100,tempNumber)} - which:
reads the mobileNumber property
adds 100 to it
stores the result into the tempNumber variable (if you don't need it, you can omit this argument)
${__setProperty(mobileNumber,${tempNumber},)} - stores the tempNumber variable value as the mobileNumber property
Functions used are:
__longSum - computes the sum of two or more long values
__P - returns the value of a JMeter property
__setProperty - assigns a value to a JMeter property
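If you prefer to keep the JSR223 approach, a frequent cause of this kind of MissingPropertyException is inlining ${...} expressions in the script body: JMeter substitutes them before the script is compiled and caches the result, which breaks Groovy's view of the code. Below is a minimal sketch that uses the standard props and vars bindings instead; it is written as Java-style statements that the Groovy engine accepts as-is, and it assumes the initial value of the mobileNumber property is seeded elsewhere (e.g. -JmobileNumber=9000000000 or user.properties):

// JSR223 Sampler body ("props" and "vars" are JMeter's standard JSR223 bindings)

// read the current value of the mobileNumber property (assumed to be seeded elsewhere)
long number = Long.parseLong(props.getProperty("mobileNumber"));

// expose it to the current thread as a user-defined variable
vars.put("mobileNumber", String.valueOf(number));

// increment the shared property by 100 for the next iteration/thread
props.put("mobileNumber", String.valueOf(number + 100));

Note that with several hundred threads this read-increment-write is not atomic, so occasional duplicates are possible (the __longSum/__setProperty pair above has the same caveat); a shared Counter element, or synchronizing the property update, would be needed for strictly unique numbers.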

Related

How to enable `relevance-trace` using MarkLogic Java API?

I'm implementing quite a complex search using the MarkLogic Java API. I would like to enable relevance-trace to see how my results are scored. Unfortunately, I don't know how to enable it in the Java API. I have tried something like:
DatabaseClient client = initClient();
var qmo = client.newServerConfigManager().newQueryOptionsManager();
var searchOptions = "<search:options xmlns=\"http://marklogic.com/appservices/search\">\n"
+ " <search-option>relevance-trace</search-option>\n"
+ " </search:options>";
qmo.writeOptions("searchOptions", new StringHandle(searchOptions).withFormat(Format.XML));
QueryManager qm = client.newQueryManager();
StructuredQueryBuilder qb = qm.newStructuredQueryBuilder("searchOptions");
// query definition
qm.search(query, new SearchHandle());
Unfortunately, it ends up with the following error:
"Local message: /config/query write failed: Internal Server Error. Server Message: XDMP-DOCNONSBIND:
xdmp:get-request-body(\"xml\") -- No namespace binding for prefix search at line 1 . See the
MarkLogic server error log for further detail."
My question is: how do I use search options in the MarkLogic Java API? In particular, I'm interested in relevance-trace and simple-score.
Update 1
As suggested by @Jamess Kerr, I have changed my options to
var searchOptions = "<options xmlns=\"http://marklogic.com/appservices/search\">\n"
+ " <search-option>relevance-trace</search-option>\n"
+ " </options>";
but unfortunately, it still doesn't work. After that change I get the following error:
Local message: /config/query write failed: Internal Server Error. Server Message: XDMP-UPDATEFUNCTIONFROMQUERY: xdmp:apply(function() as item()*) -- Cannot apply an update function from a query . See the MarkLogic server error log for further detail.
Your search options XML uses the search: namespace prefix, but you don't define that prefix. Since you are setting the default namespace, just drop the search: prefix from the search:options opening and closing tags.
The original Java query contains both syntactic and semantic issues:
First of all, it is invalid in the sense that it contains only the query options portion. Leaving out the namespace binding for the prefix is another part of the problem.
To tweak your original query, replace the search text between the search:qtext tags and run the query.
Result:
Matched and Listing 2 documents:
Matched 1 locations in /medals/coin_1333113127296.xml with 94720 score:
73. …pulsating maple leaf coin another world-first, the [Royal Canadian Mint]is proud to launch a numismatic breakthrough from its ambitious and creative R&D team...
Matched 1 locations in /medals/coin_1333078361643.xml with 94720 score:
71. ...the [Royal Canadian Mint]and Royal Australian Mint have put an end to the dispute relating to the circulation coin colouring process...
To put it into context: without a semantic criterion, your original query is the equivalent of removing the search:qtext and performing a fuzzy search.
Note:
If you use a serialised term search or search constraints instead of a text search, you should get higher-scoring results.
The MarkLogic Java API operates in unfiltered mode by default, while cts:search operates in filtered mode by default. Just be mindful of how you construct the query and what score to expect from the Java API.
The Java API is really intended for bulk data write/extract/transformation work; qconsole is, in my opinion, better suited to tuning a specific query and gathering search score, relevance and computation details.

Error: 1553 - AQL: bind parameter '@myParameter' has an invalid value or type (while parsing) - ERROR_QUERY_BIND_PARAMETER_TYPE

I have the following query:
FOR i IN items
COLLECT otherId = i._from WITH COUNT INTO counter
FILTER counter > @@myParameter
RETURN otherId
It doesn't matter whether it's executed from Java or the ArangoDB web interface; I always get the same error back:
Query: AQL: bind parameter '@myParameter' has an invalid value or type (while parsing)
If I replace @@myParameter with a number, it works.
Any idea? In Java I have tried with Integer, Long, BigInteger with no luck :-(
ArangoDB COMMUNITY EDITION v3.2.5
Non-collection parameters require a single @; the double @@ prefix is reserved for collection bind parameters.
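For illustration, here is a sketch of how the corrected query could be bound from Java, assuming the arangodb-java-driver (4.x-style API); the database name and threshold value are placeholders:

import java.util.HashMap;
import java.util.Map;

import com.arangodb.ArangoCursor;
import com.arangodb.ArangoDB;

public class CountFilterExample {
    public static void main(String[] args) {
        ArangoDB arangoDB = new ArangoDB.Builder().build();

        // single @ in the query for a value parameter; the key in the bind map
        // is given without any @ prefix
        String query = "FOR i IN items "
                + "COLLECT otherId = i._from WITH COUNT INTO counter "
                + "FILTER counter > @myParameter "
                + "RETURN otherId";

        Map<String, Object> bindVars = new HashMap<>();
        bindVars.put("myParameter", 5); // placeholder threshold

        ArangoCursor<String> cursor = arangoDB.db("myDatabase")
                .query(query, bindVars, null, String.class);
        cursor.forEachRemaining(System.out::println);
    }
}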

Neo4J REST API Call Error - Error reading as JSON ''

This is a Neo4j REST API call related error. From my Java code I'm making a REST API call to a remote Neo4j database, passing a query and parameters. The query being executed is below:
MERGE (s:Sequence {name:'CommentSequence'}) ON CREATE SET s.current = 1 ON MATCH SET s.current=s.current+1 WITH s.current as sequenceCounter MERGE (cmnt01:Comment {text: {text}, datetime:{datetime}, type:{type}}) SET cmnt01.id = sequenceCounter WITH cmnt01 MATCH (g:Game {game_id:{gameid}}),(b:Block {block_id:{bid}, game_id:{gameid}}),(u:User {email_id:{emailid}}) MERGE (b)-[:COMMENT]->(cmnt01)<-[:COMMENT]-(u)
Basically this query generates a sequence number at run time and sets the 'CommentId' property of the Comment node to this sequence number before attaching the comment node to a Game's block, i.e. for every comment added by the user I'm adding a sequence number as its id.
This works for almost 90% of the cases, but there are a couple of cases a day when it fails with the error below:
ERROR com.exectestret.dao.BaseGraphDAO - Query execution error: Error reading as JSON ''
Why does the Neo4j query not return any proper error code? It just says error reading as JSON ''.
Neo4J Version is
Neo4j Community Edition 2.2.1
Thanks,
Deepesh
It gets HTML back and can't read it as JSON, but it should output the failure HTML. Can you check the log output for that and share it too?
Also check your graph.db/messages.log and data/log/console.log for any error messages.
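To capture that raw body, it can help to log exactly what the server returned before handing the stream to a JSON parser. Below is a minimal sketch against the Neo4j 2.x transactional Cypher endpoint using only the JDK; the host, query and parameter values are placeholders, not the original code:

import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class Neo4jRawResponse {
    public static void main(String[] args) throws Exception {
        // Neo4j 2.x transactional Cypher endpoint (host/port are placeholders)
        URL url = new URL("http://localhost:7474/db/data/transaction/commit");
        String payload = "{\"statements\":[{"
                + "\"statement\":\"MATCH (g:Game {game_id:{gameid}}) RETURN g\","
                + "\"parameters\":{\"gameid\":\"g-123\"}}]}";

        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setRequestMethod("POST");
        con.setRequestProperty("Content-Type", "application/json");
        con.setRequestProperty("Accept", "application/json; charset=UTF-8");
        con.setDoOutput(true);
        try (OutputStream os = con.getOutputStream()) {
            os.write(payload.getBytes(StandardCharsets.UTF_8));
        }

        // Log status and raw body before any JSON parsing, so an HTML error
        // page (the "Error reading as JSON ''" case) becomes visible.
        int status = con.getResponseCode();
        InputStream body = status < 400 ? con.getInputStream() : con.getErrorStream();
        System.out.println("HTTP " + status);
        if (body != null) {
            try (Scanner s = new Scanner(body, "UTF-8").useDelimiter("\\A")) {
                System.out.println(s.hasNext() ? s.next() : "<empty body>");
            }
        }
    }
}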

SearchContextMissingException Failed to execute fetch phase [search/phase/fetch/id]

Cluster: I am using Elasticsearch 1.3.1 with 6 nodes on different servers, all connected over a LAN. The bandwidth is high and each node has 45 GB of RAM.
Configuration: The heap size allocated for each node is 10g. We use the default Elasticsearch configuration except for the discovery settings, cluster name and node name, and we have 2 zones: 3 nodes belong to one zone and the others to the second zone.
Indices: 15; the total size of the indices is 76 GB.
These days I am seeing the SearchContextMissingException as a DEBUG log entry. It smells like some search query is taking too much time to fetch, but I checked the queries and there was no query producing a high load on the cluster... I am wondering why this happens.
Issue: Because of this, one by one all the nodes start garbage collecting heavily, which results in OOM :(
Here is my exception. Please kindly explain two things:
What is SearchContextMissingException? Why does it happen?
How can we protect the cluster from this type of query?
The Error:
[YYYY-MM-DD HH:mm:ss,039][DEBUG][action.search.type ] [es_node_01] [5031530]
Failed to execute fetch phase
org.elasticsearch.transport.RemoteTransportException: [es_node_02][inet[/1x.x.xx.xx:9300]][search/phase/fetch/id]
Caused by: org.elasticsearch.search.SearchContextMissingException: No search context found for id [5031530]
at org.elasticsearch.search.SearchService.findContext(SearchService.java:480)
at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:450)
at org.elasticsearch.search.action.SearchServiceTransportAction$SearchFetchByIdTransportHandler.messageReceived(SearchServiceTransportAction.java:793)
at org.elasticsearch.search.action.SearchServiceTransportAction$SearchFetchByIdTransportHandler.messageReceived(SearchServiceTransportAction.java:782)
at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.run(MessageChannelHandler.java:275)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
If you can, update to 1.4.2. It fixes some known resilience issues, including cascading failures like you describe.
Regardless of that, the default configuration will definitely get you in trouble. At a minimum, you should look at setting up circuit breakers, e.g. for the field data cache.
Here's a snippet lifted from our production configuration. I assume you have also configured the Linux file handle limits correctly:
# prevent swapping
bootstrap.mlockall: true
indices.breaker.total.limit: 70%
indices.fielddata.cache.size: 70%
# make elasticsearch work harder to migrate/allocate indices on startup (we have a lot of shards due to logstash); default was 2
cluster.routing.allocation.node_concurrent_recoveries: 8
# enable cors
http.cors.enabled: true
http.cors.allow-origin: /https?:\/\/(localhost|kibana.*\.linko\.io)(:[0-9]+)?/
index.query.bool.max_clause_count: 4096
The same error (or debug statement) still occurs in 1.6.0, and is not a bug.
When you create a new scroll request:
SearchResponse scrollResponse = client.prepareSearch(index).setTypes(types).setSearchType(SearchType.SCAN)
.setScroll(new TimeValue(60000)).setSize(maxItemsPerScrollRequest).setQuery(ElasticSearchQueryBuilder.createMatchAllQuery()).execute().actionGet();
String scrollId = scrollResponse.getScrollId();
a new scroll id is created (apart from the scrollId the response is empty). To fetch the results:
long resultCounter = 0l; // to keep track of the number of results retrieved
Long nResultsTotal = null; // total number of items we will be expecting
do {
    final SearchResponse response = client.prepareSearchScroll(scrollId).setScroll(new TimeValue(600000)).execute().actionGet();
    // handle result
    if (nResultsTotal == null) // if not initialized
        nResultsTotal = response.getHits().getTotalHits(); // set total number of documents
    resultCounter += response.getHits().getHits().length; // keep track of the items retrieved
} while (resultCounter < nResultsTotal);
This approach works regardless of the number of shards you have. Another option is to add a break statement when:
boolean breakIf = response.getHits().getHits().length < (nShards * maxItemsPerScrollRequest);
The number of items to be returned is maxItemsPerScrollRequest (per shard!), so we'd expect the number of items requested multiplied by the number of shards. But when we have multiple shards and one of them runs out of documents while the others do not, the former method will still give us all available documents, whereas the latter will stop prematurely - I expect (I haven't tried it!).
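Once all hits have been consumed, it also helps to release the server-side scroll context explicitly instead of letting it sit until the timeout expires; stale or expired contexts are exactly what SearchContextMissingException complains about. A sketch, assuming the same client and scrollId as in the snippets above on an Elasticsearch 1.x Java client:

// Free the scroll context on the cluster as soon as the iteration is finished;
// otherwise it keeps holding resources until the scroll TimeValue elapses.
client.prepareClearScroll()
      .addScrollId(scrollId)
      .execute()
      .actionGet();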
Another way to stop seeing this exception (since it is 'only' DEBUG) is to open the logging.yml file in the config directory of Elasticsearch and change:
action: DEBUG
to
action: INFO

Iterating over every document in Lotus Domino

I'd like iterate over every document in a (probably big) Lotus Domino database and be able to continue it from the last one if the processing breaks (network connection error, application restart etc.). I don't have write access to the database.
I'm looking for a way where I don't have to download those documents from the server which were already processed. So, I have to pass some starting information to the server which document should be the first in the (possibly restarted) processing.
I've checked the AllDocuments property and the DocumentCollection.getNthDocument method, but this property is unsorted, so I guess the order can change between two calls.
Another idea was using a formula query but it does not seem that ordering is possible with these queries.
The third idea was the Database.getModifiedDocuments method with a corresponding Document.getLastModified one. It seemed good, but it looks to me like the ordering of the returned collection is not documented and is based on creation time instead of last modification time.
Here is a sample code based on the official example:
System.out.println("startDate: " + startDate);
final DocumentCollection documentCollection =
database.getModifiedDocuments(startDate, Database.DBMOD_DOC_DATA);
Document doc = documentCollection.getFirstDocument();
while (doc != null) {
System.out.println("#lastmod: " + doc.getLastModified() +
" #created: " + doc.getCreated());
doc = documentCollection.getNextDocument(doc);
}
It prints the following:
startDate: 2012.07.03 08:51:11 CEDT
#lastmod: 2012.07.03 08:51:11 CEDT #created: 2012.02.23 10:35:31 CET
#lastmod: 2012.08.03 12:20:33 CEDT #created: 2012.06.01 16:26:35 CEDT
#lastmod: 2012.07.03 09:20:53 CEDT #created: 2012.07.03 09:20:03 CEDT
#lastmod: 2012.07.21 23:17:35 CEDT #created: 2012.07.03 09:24:44 CEDT
#lastmod: 2012.07.03 10:10:53 CEDT #created: 2012.07.03 10:10:41 CEDT
#lastmod: 2012.07.23 16:26:22 CEDT #created: 2012.07.23 16:26:22 CEDT
(I don't use any AgentContext here to access the database. The database object comes from a session.getDatabase(null, databaseName) call.)
Is there any way to reliably do this with the Lotus Domino Java API?
If you have access to change the database, or could ask someone to do so, then you should create a view that is sorted on a unique key, or modified date, and then just store the "pointer" to the last document processed.
Barring that, you'll have to maintain a list of previously processed documents yourself. In that case you can use the AllDocuments property and just iterate through them. Use the GetFirstDocument and GetNextDocument as they are reportedly faster than GetNthDocument.
Alternatively you could make two passes, one to gather a list of UNIDs for all documents, which you'll store, and then make a second pass to process each document from the list of UNIDs you have (using GetDocumentByUNID method).
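A rough Java sketch of that two-pass idea follows; the Database object is obtained as in the question, and the persistence helpers left as comments are hypothetical (a plain text file would do):

import java.util.ArrayList;
import java.util.List;
import lotus.domino.Database;
import lotus.domino.Document;
import lotus.domino.DocumentCollection;
import lotus.domino.NotesException;

public class TwoPassWalker {
    public void walk(Database database) throws NotesException {
        // Pass 1: collect every UNID once and persist the list somewhere durable.
        List<String> unids = new ArrayList<String>();
        DocumentCollection all = database.getAllDocuments();
        Document doc = all.getFirstDocument();
        while (doc != null) {
            unids.add(doc.getUniversalID());
            Document next = all.getNextDocument(doc);
            doc.recycle();
            doc = next;
        }
        // storeUnidList(unids);           // hypothetical helper

        // Pass 2: process by UNID, remembering the last completed index so a
        // restart can resume from that point.
        int start = 0;                      // e.g. loadLastIndex(), hypothetical helper
        for (int i = start; i < unids.size(); i++) {
            Document d = database.getDocumentByUNID(unids.get(i));
            // ... process d ...
            d.recycle();
            // storeLastIndex(i);           // hypothetical helper
        }
    }
}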
I don't use the Java API, but in LotusScript, I would do something like this:
Locate a view displaying all documents in the database. If you want the agent to be really fast, create a new view. The first column should be sorted and could contain the Universal ID of the document. The other columns contains all the values you want to read in your agent, in your example that would be the created date and last modified date.
Your code could then simply loop through the view like this:
lastSuccessful = FunctionToReadValuesSomewhere() ' Returns 0 if empty
Set view = thisdb.GetView("MyLookupView")
Set col = view.AllEntries
Set entry = col.GetFirstEntry
cnt = 0
Do Until entry Is Nothing
    cnt = cnt + 1
    If cnt > lastSuccessful Then
        universalID = entry.ColumnValues(0)
        createDate = entry.ColumnValues(1)
        lastmodifiedDate = entry.ColumnValues(2)
        Call YourFunctionToDoStuff(universalID, createDate, lastmodifiedDate)
        Call FunctionToStoreValuesSomeWhere(cnt, universalID)
    End If
    Set entry = col.GetNextEntry(entry)
Loop
Call FunctionToClearValuesSomeWhere()
Simply store the last successful value and Universal ID in say a text file or environment variable or even profile document in the database.
When you restart the agent, have some code that check if the values are blank (then return 0), otherwise return the last successful value.
Agents already keep a field to describe documents that they have not yet processed, and these are automatically updated via normal processing.
A better way of doing what you're attempting might be to store the results of a search in a profile document. However, if you're trying to work with documents in a database you do not have write permission to, the only thing you can do is keep a list of the doclinks you've already processed (and any information you need to keep about those documents), or keep a sister database holding one document for each doclink plus whatever fields relate to the processing you've done on them. Then transfer the lists of IDs and perform the matching on the client to do per-document lookups.
Lotus Notes/Domino databases are designed to be distributed across clients and servers in a replicated environment. In the general case, you do not have a guarantee that starting at a given creation or mod time will bring you consistent results.
If you are 100% certain that no replicas of your target database are ever made, then you can use getModifiedDocuments and then write a sort routine to place (modDateTime,UNID) pairs into a SortedSet or other suitable data structure. Then you can process through the Set, and if you run into an error you can save the modDateTime of the element that you were attempting to process as your restart point. There may be a few additional details for you to work out to avoid duplicates, however, if there are multiple documents with the exact same modDateTime stamp.
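A rough Java sketch of that sorted-by-modification-time idea, assuming no replicas as stated above; the API calls come from the standard lotus.domino classes already used in the question, and the commented-out restart helper is hypothetical:

import java.util.Date;
import java.util.Map;
import java.util.TreeMap;
import lotus.domino.Database;
import lotus.domino.DateTime;
import lotus.domino.Document;
import lotus.domino.DocumentCollection;
import lotus.domino.NotesException;

public class ModifiedDocWalker {
    public void process(Database database, DateTime since) throws NotesException {
        DocumentCollection col =
                database.getModifiedDocuments(since, Database.DBMOD_DOC_DATA);

        // TreeMap keeps the entries ordered by modification time. Note that a
        // plain Date key drops documents sharing the exact same timestamp --
        // the duplicate-handling caveat mentioned above.
        TreeMap<Date, String> ordered = new TreeMap<Date, String>();
        Document doc = col.getFirstDocument();
        while (doc != null) {
            ordered.put(doc.getLastModified().toJavaDate(), doc.getUniversalID());
            Document next = col.getNextDocument(doc);
            doc.recycle();
            doc = next;
        }

        for (Map.Entry<Date, String> e : ordered.entrySet()) {
            Document d = database.getDocumentByUNID(e.getValue());
            // ... process d ...
            d.recycle();
            // saveRestartPoint(e.getKey()); // hypothetical: persist the restart point
        }
    }
}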
I want to make one final remark. I understand that you are asking about Java, but if you are working on a backup or archiving system for compliance purposes, the Lotus C API has special functions that you really should look at.
