I came across a use-case where I need to update a specific field in an Elasticsearch document, so I used the Update API with a script, as described in the ES docs. However, I hit a compilation error with the Script constructor that accepts the parameters type, lang, idOrCode, and params; the problem was with the params (java.util.Map) parameter.
I even tried the Scala-to-Java converters but could not solve it.
Code snippet
import org.elasticsearch.action.update.UpdateRequest
import org.elasticsearch.client.RequestOptions
import org.elasticsearch.script.{Script, ScriptType}

object Testing extends App {
  val result = updateByScript("testing", "hW7BBnQBn2nWmIjS_b0C", 10.0)
  println("######result:---> " + result)
  high_level_client.close()

  def updateByScript(index: String, id: String, count: Double) = {
    //import scala.collection.JavaConversions.mapAsJavaMap
    //import collection.JavaConverters._
    import scala.collection.JavaConverters._
    val updateRequest = new UpdateRequest(index, id)
    val params = Map[String, Double]("count" -> count)
    val script = new Script(ScriptType.INLINE, "painless", "ctx._source.count += params.count", mapAsJavaMap(params))
    updateRequest.script(script)
    high_level_client.update(updateRequest, RequestOptions.DEFAULT)
  }
}
Using the Script constructor that takes only the idOrCode parameter solved my use-case, but I still have no solution for the other Script constructors.
Working code with the constructor that accepts the idOrCode parameter.
Code snippet
import org.elasticsearch.action.update.UpdateRequest
import org.elasticsearch.client.RequestOptions
import org.elasticsearch.script.{Script, ScriptType}

object Testing extends App {
  val result = updateByScript("testing", "hW7BBnQBn2nWmIjS_b0C", 10.0)
  println("######result:---> " + result)
  high_level_client.close()

  def updateByScript(index: String, id: String, count: Double) = {
    val updateRequest = new UpdateRequest(index, id)
    val script = new Script(s"""ctx._source.count += $count""")
    updateRequest.script(script)
    high_level_client.update(updateRequest, RequestOptions.DEFAULT)
  }
}
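For what it's worth, a likely cause of the compile error with the four-argument constructor is the params type: Script expects a java.util.Map<String, Object>, while a Scala Map[String, Double] converts only to java.util.Map[String, Double]. A minimal sketch of the conversion (plain Scala, no Elasticsearch client needed), assuming that is the mismatch:

```scala
import scala.collection.JavaConverters._

// Declare the values as AnyRef (boxing the Double) so that .asJava
// produces a java.util.Map[String, AnyRef], i.e. Map<String, Object> in Java,
// which is what the Script(type, lang, idOrCode, params) constructor expects.
val params: Map[String, AnyRef] = Map("count" -> Double.box(10.0))
val javaParams: java.util.Map[String, AnyRef] = params.asJava
```

With that, `new Script(ScriptType.INLINE, "painless", "ctx._source.count += params.count", javaParams)` should type-check; the constructor arguments are taken from the question above.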
I am new to Geb, Spock, and Groovy. I have a Groovy class containing my JSON; in that class I count how many objects are in the JSON and read the key values for each object. I also have a unit test spec in Spock and Geb where I have created a very simple login test script for the application.
The scenario I am trying to achieve: I want to generate the data table in the Spock test from the data in the JSON file.
Here is what I have achieved so far.
My InputDataJson.groovy file:
package resources

import geb.spock.GebSpec
import groovy.json.JsonSlurper
import spock.lang.Shared

class InputDataJson extends GebSpec {
  @Shared
  def inputJSON, idValue, passwordValue, jsonSize
  @Shared
  def credsList = []

  def setup() {
    inputJSON = '''{
      "validLogin":{
        "username" : "abc",
        "password" : "correcttest"
      },
      "invalidLogin":{
        "username" : "xyz",
        "password" : "badtest"
      }
    }'''
    JsonSlurper slurper = new JsonSlurper()
    TreeMap parsedJson = slurper.parseText(inputJSON)
    jsonSize = parsedJson.size()
    Set keySet = parsedJson.keySet()
    int keySetCount = keySet.size()
    for (String key : keySet) {
      credsList.add(new Creds(username: parsedJson[key].username, password: parsedJson[key].password))
    }
  }
}
And here is my sample Spock/Geb test:
package com.test.demo

import grails.test.mixin.TestMixin
import grails.test.mixin.support.GrailsUnitTestMixin
import pages.LoginPage
import resources.InputDataJson

/**
 * See the API for {@link grails.test.mixin.support.GrailsUnitTestMixin} for usage instructions
 */
@TestMixin(GrailsUnitTestMixin)
class SampleTest1Spec extends InputDataJson {
  def credentialsList = []

  def setup() {
    credentialsList = credsList
  }

  def cleanup() {
  }

  void "test something"() {
  }

  def "This LoginSpec test"() {
    given:
    to LoginPage

    when: 'I am entering username and password'
    setUsername(username)
    setPassword(password)
    login()

    then: "I am being redirected to the homepage"
    println("Hello")

    where:
    [username, password] << getCreds()
    //credsList[0]['username'] | credsList[0]['password']
  }

  def getCreds() {
    println(" CREDS inside " + credsList)
    println(" credentialsList : " + credentialsList)
  }
}
The problem: when I run this test in debug mode, both credsList and credentialsList come up null (I understand that in a Spock test the where clause is evaluated first), yet by the time execution reaches the when section it fetches the correct username and password. I am not sure where I am making a mistake.
Any help is appreciated.
Leonard Brünings said:
try replacing setup with setupSpec
Exactly, this is the most important thing. You want something that is initialised before any feature method or iteration thereof starts. So if you want to initialise static or shared fields, this is the way to go.
Additionally, credsList contains Creds objects, not just pairs of user names and passwords. Therefore, if you want those in separate data variables, you need to dereference them in the Creds objects. Here is a simplified version of your Spock tests without any Grails or Geb, because your question is really just a plain Spock question:
package de.scrum_master.stackoverflow.q71122575

class Creds {
  String username
  String password

  @Override
  String toString() {
    "Creds{" + "username='" + username + '\'' + ", password='" + password + '\'' + '}'
  }
}
package de.scrum_master.stackoverflow.q71122575

import groovy.json.JsonSlurper
import spock.lang.Shared
import spock.lang.Specification

class InputDataJson extends Specification {
  @Shared
  List<Creds> credsList = []

  def setupSpec() {
    def inputJSON = '''{
      "validLogin" : {
        "username" : "abc",
        "password" : "correcttest"
      },
      "invalidLogin" : {
        "username" : "xyz",
        "password" : "badtest"
      }
    }'''
    credsList = new JsonSlurper().parseText(inputJSON)
      .values()
      .collect { login -> new Creds(username: login.username, password: login.password) }
  }
}
package de.scrum_master.stackoverflow.q71122575

import spock.lang.Unroll

class CredsTest extends InputDataJson {
  @Unroll("verify credentials for user #username")
  def "verify parsed credentials"() {
    given:
    println "$username, $password"

    expect:
    username.length() >= 3
    password.length() >= 6

    where:
    cred << credsList
    username = cred.username
    password = cred.password
  }
}
I am using this filter to log the request and response time:
import play.api.Logger
import play.api.mvc._
import scala.concurrent.Future
import play.api.libs.concurrent.Execution.Implicits.defaultContext

object LoggingFilter extends Filter {
  def apply(nextFilter: (RequestHeader) => Future[SimpleResult])
           (requestHeader: RequestHeader): Future[SimpleResult] = {
    val startTime = System.currentTimeMillis
    nextFilter(requestHeader).map { result =>
      val endTime = System.currentTimeMillis
      val requestTime = endTime - startTime
      Logger.info(s"${requestHeader.method} ${requestHeader.uri} " +
        s"took ${requestTime}ms and returned ${result.header.status}")
      result.withHeaders("Request-Time" -> requestTime.toString)
    }
  }
}
However, I need to save the request body in the log as well. I know this can be done using Scala action composition, but I can't figure out how to get it done.
Any help?
If you want to access the request body, you need to override the apply(EssentialAction) method of an EssentialFilter object.
For reference :
Play documentation, "More powerful filters" paragraph
Understanding the Play Filter API, by James Roper
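A sketch of such a filter, assuming Play 2.2's iteratee-based API (the same version as the Filter above); this is untested, and buffering the whole body into memory like this is only suitable for small request bodies:

```scala
import play.api.Logger
import play.api.libs.iteratee._
import play.api.mvc._
import play.api.libs.concurrent.Execution.Implicits.defaultContext

object BodyLoggingFilter extends EssentialFilter {
  def apply(next: EssentialAction) = new EssentialAction {
    def apply(requestHeader: RequestHeader): Iteratee[Array[Byte], SimpleResult] = {
      // Consume all body chunks, log them, then replay the bytes into the
      // wrapped action so that it still sees the full body.
      Iteratee.consume[Array[Byte]]().flatMap { bytes =>
        Logger.info(s"${requestHeader.method} ${requestHeader.uri} body: ${new String(bytes, "UTF-8")}")
        Iteratee.flatten(Enumerator(bytes) |>> next(requestHeader))
      }
    }
  }
}
```

The references above (in particular James Roper's post) cover streaming variants that avoid buffering.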
I am using the DataStax Java driver for Cassandra from Scala (2.10.4) to build batches of prepared statements, but have hit the following problem.
Table definition in CQL:
use ks;
drop table bc_test;
create table bc_test(
id TEXT,
id_1 BIGINT,
c1 COUNTER,
c2 COUNTER,
PRIMARY KEY(id, id_1)
);
The first problem is an implicit numeric widening error:
import com.datastax.driver.core.{ResultSet, ProtocolOptions, Cluster, Session}
import com.datastax.driver.core.RegularStatement
import com.datastax.driver.core.PreparedStatement
import com.datastax.driver.core.BoundStatement
import com.datastax.driver.core.BatchStatement
val c = Cluster.builder().addContactPoint("127.0.0.1").withPort(9042).build
val sess = c.connect("ks")
val p_bc_test_c1: PreparedStatement =
sess.prepare("UPDATE bc_test SET c1 = c1 + ? WHERE id = ? and id_1 = ?")
val batch: BatchStatement = new BatchStatement(BatchStatement.Type.COUNTER);
val id1: Long = 1L
val idText: String = "one"
val c1: Long = 1L
batch.add(p_bc_test_c1.bind(c1, "one", id1))
// which gives the error
scala> batch.add(p_bc_test_c1.bind(c1, "one", id1))
<console>:19: error: implicit numeric widening
batch.add(p_bc_test_c1.bind(c1, "one", id1))
^
I can get around this using type ascription, but wanted to find something a bit nicer that can also be built into a more generic solution. What I came up with was:
import scalaz._
import Scalaz._
import shapeless._
import poly._
import syntax.std.tuple._

trait CassandraLongVal
type CassandraLong = java.lang.Long @@ CassandraLongVal
def cassandraLongCol(c: Long): CassandraLong = Tag(c)

trait CassandraIntVal
type CassandraInt = java.lang.Integer @@ CassandraIntVal
def cassandraIntCol(c: Int): CassandraInt = Tag(c)

object ToCassandraType extends Poly1 {
  implicit def caseInt = at[Int](cassandraIntCol(_))
  implicit def caseLong = at[Long](cassandraLongCol(_))
  implicit def caseString = at[String](identity)
}

case class Update1(cval: Long, id: String, id1: Long)
val update1Gen = Generic[Update1]

val bUpdate1 = Update1(125L, "two", 1L)
val update1AsArray = update1Gen.to(bUpdate1).map(ToCassandraType).toArray

batch.add(p_bc_test_c1.bind(update1AsArray: _*))
batch.add(p_bc_test_c1.bind(update1AsArray: _*))
sess.execute(batch)
This seems to work quite happily, and I can use the basic pattern for dozens of tables with different columns. Is this a reasonable approach, or is there a simpler way to get the same outcome while keeping a handle on the types? And am I abusing scalaz/shapeless through my (considerable) ignorance?
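As a side note, the type-ascription workaround mentioned above boils down to boxing the numeric values to their java.lang counterparts before they hit the Object-varargs bind. A minimal sketch of just the boxing (plain Scala, no driver required):

```scala
// bind(Object...) needs boxed values; box the Scala Longs explicitly
// instead of letting the compiler insert the conversion it warns about.
val c1: Long = 1L
val id1: Long = 1L
val boxedArgs: Array[AnyRef] = Array(Long.box(c1), "one", Long.box(id1))
// p_bc_test_c1.bind(boxedArgs: _*) would then compile without the warning.
```

The tagged-type machinery in the question generalises exactly this boxing across many tables.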
I am developing an Android application in Java. All I want is to record a song, generate its hash (code), and then query the Echoprint server for a match. If a match is not found, the song should be uploaded to the server (ingest) for future reference.
I have been able to achieve the first part. Can someone advise me on the second part in Java? (P.S.: I've seen how to do it with Python code, but that won't be helpful in my case.)
Another question: can I achieve the second objective with the global Echoprint server, or do I need to set up one of my own?
The references I've used are:
http://masl.cis.gvsu.edu/2012/01/25/android-echoprint/
https://github.com/gvsumasl/EchoprintForAndroid
To insert a song into the Echoprint server database, all you need to do is call the ingest method. Basically, it is just an HTTP POST request with the correct JSON body. Here is Scala code (Java would be very similar) that I am using for that:
import EasyJSON.JSON
import EasyJSON.ScalaJSON
import dispatch.Defaults.executor
import dispatch._

class EchoprintAPI {
  val API_URL = "http://your.api.server"

  def queryURL(code: String) = url(s"$API_URL/query?fp_code=$code")

  def query(code: String): scala.concurrent.Future[ScalaJSON] = {
    jsonResponse(queryURL(code))
  }

  def ingest(json: ScalaJSON, trackId: String): scala.concurrent.Future[ScalaJSON] = {
    val metadata = json("metadata")
    val request = url(s"$API_URL/ingest").POST
      .addParameter("fp_code", json("code").toString)
      .addParameter("artist", metadata("artist").toString)
      .addParameter("release", metadata("release").toString)
      .addParameter("track", metadata("title").toString)
      .addParameter("codever", metadata("version").toString)
      .addParameter("length", metadata("duration").toString)
      .addParameter("genre", metadata("genre").toString)
      .addParameter("bitrate", metadata("bitrate").toString)
      .addParameter("source", metadata("filename").toString)
      .addParameter("track_id", trackId)
      .addParameter("sample_rate", metadata("sample_rate").toString)
    jsonResponse(request)
  }

  def delete(trackId: String): scala.concurrent.Future[ScalaJSON] = {
    jsonResponse(url(s"$API_URL/query?track_id=$trackId").DELETE)
  }

  protected def jsonResponse(request: dispatch.Req): scala.concurrent.Future[EasyJSON.ScalaJSON] = {
    val response = Http(request OK as.String)
    for (c <- response) yield JSON.parseJSON(c)
  }
}
To generate the fingerprint code, you can use an echoprint-codegen command-line call, or use the Java JNI integration with the C library.
I have this in my app/scripts folder (I created this folder inside app/). I'm not sure how to properly set the classpath here, so I haven't even run it to see whether it will actually connect to the database. How can I run this cleanly from the command line?
package scripts

import scala.collection.TraversableOnce
import scala.collection.generic.SeqForwarder
import scala.io.Source
import scala.slick.jdbc.{StaticQuery => Q}
import scala.slick.session.Session
import scala.slick.session.Database
import play.api.db.DB
import tables.Campeonatos
import tables.Jogos
import org.postgresql.Driver
import play.api.test._
import play.api.test.Helpers._

// ...

class InsertJogosCSV extends App {
  val dao = new DAO()
  val application = FakeApplication()

  def insertJogos(csv: CSV)(implicit s: Session) = {
    val times = dao.getTimeIdByNameMap
    var count = 0
    csv foreach { case cols =>
      count += 1
      dao.insertJogo(cols, times)
    }
    count
  }

  val csvFilePath: String = args(0)
  val csv = new CSV(csvFilePath)
  csv.printLines

  running(application) {
    val realDatabase = Database.forDataSource(DB.getDataSource()(application))
    implicit val s = realDatabase.createSession
    insertJogos(csv)
  }
}
I've made a blog post explaining my final solution; it should work as an answer to the question:
http://blog.felipe.rs/2014/05/15/run-maintenance-scripts-in-the-context-of-a-running-play-framework-application/
You could achieve this by using the play test:console command at the root of your app. First, move the code into a main method rather than extending App:
class InsertJogosCSV {
  def main(args: Array[String]) {
    val dao = new DAO()
    val application = FakeApplication()

    def insertJogos(csv: CSV)(implicit s: Session) = {....}
    ....
  }
}
Then run the play test:console command and do the following:
scala> import scripts.InsertJogosCSV
import scripts.InsertJogosCSV

scala> val insert = new InsertJogosCSV()
insert: scripts.InsertJogosCSV = scripts.InsertJogosCSV@7d5f9d2b

scala> insert.main(Array.empty)
res0: .....
The play test:console command by default adds everything from the app folder to your classpath, as well as the FakeApplication context that you need for your script. Hope that helps.
Similar question: https://stackoverflow.com/a/11297578/2556428
I am using another approach for a similar task: create an empty subproject in the main Play Framework project.
In build.sbt it looks like:
// Main project
lazy val root = (project in file(".")).enablePlugins(play.PlayScala).enablePlugins(com.typesafe.sbt.web.SbtWeb)
// Utils sbt project with scripts
lazy val sjutil = project.aggregate(root).dependsOn(root)
and sjutil/build.sbt looks like a normal sbt project with extra dependencies if needed; for me it was akka-remote:
name := "sjutil"
version := "1.0-SNAPSHOT"
scalaVersion := "2.11.1"
libraryDependencies += "com.typesafe.akka" %% "akka-remote" % "2.3.4"
You can place an App object directly in the sjutil/ folder.
sjutil/actorsstarter.scala:
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

object actorsstarter {
  lazy val logger = play.api.Logger("actorsstarter")

  def main(args: Array[String]) {
    // read config from conf/application.conf of the main project
    val remoteConfig = ConfigFactory.load.getConfig("botstarter")
    val system = ActorSystem("application", remoteConfig)
    val path = "akka.tcp://application@127.0.0.1:2553/user"
    ....
    logger.info("Started")
  }
}
After that you can run this script with:
./activator sjutil/run
and do everything you can do with a normal project: stage, dist, etc.