I'm having trouble finding examples of what I'm trying to do...
I'd like to create a Lambda function in Java. I thought I'd always use JavaScript for Lambda functions, but in this case I'll end up reusing application logic already written in Java, so it makes sense.
In the past I've written JavaScript Lambda functions that are triggered by Kinesis events. Super simple: the function receives the events as a parameter, does something, voila. I'd like to do the same thing with Java. Really simple:
Kinesis Event(s) -> Trigger Function -> (Java) Receive Kinesis Events, do something with them
Anyone have experience with this kind of use case?
Here is some sample code I wrote to demonstrate the same concept internally. This code forwards events from one stream to another.
Note this code does not handle retries if there are errors in forwarding, nor is it meant to be performant in a production environment, but it does demonstrate how to handle the records from the publishing stream.
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.kinesis.AmazonKinesisClient;
import com.amazonaws.services.kinesis.model.PutRecordsRequest;
import com.amazonaws.services.kinesis.model.PutRecordsRequestEntry;
import com.amazonaws.services.kinesis.model.PutRecordsResult;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.LambdaLogger;
import com.amazonaws.services.lambda.runtime.events.KinesisEvent;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
public class KinesisToKinesis {
private LambdaLogger logger;
final private AmazonKinesisClient kinesisClient = new AmazonKinesisClient();
public PutRecordsResult eventHandler(KinesisEvent event, Context context) {
logger = context.getLogger();
if (event == null || event.getRecords() == null) {
logger.log("Event contains no data" + System.lineSeparator());
return null;
} else {
logger.log("Received " + event.getRecords().size() +
" records from " + event.getRecords().get(0).getEventSourceARN() + System.lineSeparator());
}
final Long startTime = System.currentTimeMillis();
// set up the client
Region region;
final Map<String, String> environmentVariables = System.getenv();
if (environmentVariables.containsKey("AWS_REGION")) {
region = Region.getRegion(Regions.fromName(environmentVariables.get("AWS_REGION")));
} else {
region = Region.getRegion(Regions.US_WEST_2);
logger.log("Using default region: " + region.toString() + System.lineSeparator());
}
kinesisClient.setRegion(region);
Long elapsed = System.currentTimeMillis() - startTime;
logger.log("Finished setup in " + elapsed + " ms" + System.lineSeparator());
PutRecordsRequest putRecordsRequest = new PutRecordsRequest().withStreamName("usagecounters-global");
List<PutRecordsRequestEntry> putRecordsRequestEntryList = event.getRecords().parallelStream()
.map(r -> new PutRecordsRequestEntry()
.withData(ByteBuffer.wrap(r.getKinesis().getData().array()))
.withPartitionKey(r.getKinesis().getPartitionKey()))
.collect(Collectors.toList());
putRecordsRequest.setRecords(putRecordsRequestEntryList);
elapsed = System.currentTimeMillis() - startTime;
logger.log("Processed " + putRecordsRequest.getRecords().size() +
" records in " + elapsed + " ms" + System.lineSeparator());
PutRecordsResult putRecordsResult = kinesisClient.putRecords(putRecordsRequest);
elapsed = System.currentTimeMillis() - startTime;
logger.log("Forwarded " + putRecordsRequest.getRecords().size() +
" records to Kinesis " + putRecordsRequest.getStreamName() +
" in " + elapsed + " ms" + System.lineSeparator());
return putRecordsResult;
}
}
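For reference, the forwarding above is just what this sample happens to do; the minimal shape of the original question (receive the Kinesis events in Java, do something with them) is a class implementing RequestHandler<KinesisEvent, Void>. A small sketch, assuming the record payloads are UTF-8 text (adjust the decoding to whatever your producer writes):
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.KinesisEvent;
import java.nio.charset.StandardCharsets;

public class KinesisConsumer implements RequestHandler<KinesisEvent, Void> {
    @Override
    public Void handleRequest(KinesisEvent event, Context context) {
        // Each KinesisEventRecord wraps one raw Kinesis record; getData() is the payload.
        for (KinesisEvent.KinesisEventRecord record : event.getRecords()) {
            String payload = StandardCharsets.UTF_8.decode(record.getKinesis().getData()).toString();
            context.getLogger().log("partitionKey=" + record.getKinesis().getPartitionKey()
                    + " payload=" + payload + System.lineSeparator());
            // ... call into your existing Java application logic here ...
        }
        return null;
    }
}
Register it as the function's handler (the class name when implementing RequestHandler, or package.Class::method for a plain method like eventHandler above) and attach the Kinesis stream as the event source, exactly as you would for the JavaScript version.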
I have a client sending in a JWKS like this:
{
  "keys": [
    {
      "kty": "RSA",
      "use": "sig",
      "alg": "RS256",
      "kid": "...",
      "x5t": "...",
      "custom_field_1": "this is some content",
      "custom_field_2": "this is some content, too",
      "n": "...",
      "x5c": "..."
    }
  ]
}
Using the com.nimbusds.jose.jwk.JWKSet, I'd like to rip through the keys via the getKeys() method, which gives me com.nimbusds.jose.jwk.JWK objects and read those custom fields. I need to perform some custom logic based on these fields.
Is there a way to do this?
There does not appear to be a way to deal with this using the standard libraries. I was unable to find anything that indicates a custom parameter in a JWK is legitimate, but the JOSE library seems to just ignore it. So the only thing I can find is to read it "by hand". Something like:
import com.jayway.jsonpath.JsonPath;
import java.util.HashMap;
import java.util.List;
public class JWKSParserDirect {
private static final String jwks = "{\"keys\": [{" +
"\"kty\": \"RSA\"," +
"\"use\": \"sig\"," +
"\"alg\": \"RS256\"," +
"\"e\": \"AQAB\"," +
"\"kid\": \"2aktWjYabDofafVZIQc_452eAW9Z_pw7ULGGx87ufVA\"," +
"\"x5t\": \"5FTiZff07R_NuqNy5QXUK7uZNLo\"," +
"\"custom_field_1\": \"this is some content\"," +
"\"custom_field_2\": \"this is some content, too\"," +
"\"n\": \"foofoofoo\"," +
"\"x5c\": [\"blahblahblah\"]" +
"}" +
"," +
"{" +
"\"kty\": \"RSA\"," +
"\"use\": \"sig\"," +
"\"alg\": \"RS256\"," +
"\"e\": \"AQAB\"," +
"\"kid\": \"2aktWjYabDofafVZIQc_452eAW9Z_pw7ULGGx87ufVA\"," +
"\"x5t\": \"5FTiZff07R_NuqNy5QXUK7uZNLo\"," +
"\"custom_field_1\": \"this is some content the second time\"," +
"\"custom_field_2\": \"this is some content, too and two\"," +
"\"n\": \"foofoofoo\"," +
"\"x5c\": [\"blahblahblah\"]" +
"}]}";
@SuppressWarnings("unchecked")
public static void main(String[] argv) {
List<Object> keys = JsonPath.read(jwks, "$.keys[*]");
for (Object key : keys) {
HashMap<String, String> keyContents = (HashMap<String, String>) key;
System.out.println("custom_field_1 is \"" + keyContents.get("custom_field_1") + "\"");
System.out.println("custom_field_2 is \"" + keyContents.get("custom_field_2") + "\"");
}
}
}
or, to go direct to the JWK:
import com.jayway.jsonpath.JsonPath;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;
import java.util.HashMap;
import java.util.List;
public class JWKSParserURL {
@SuppressWarnings("unchecked")
public static void main(String[] argv) {
try {
URL url = new URL("https://someserver.tld/auth/realms/realmname/protocol/openid-connect/certs");
URLConnection urlConnection = url.openConnection();
InputStream inputStream = urlConnection.getInputStream();
List<Object> keys = JsonPath.read(inputStream, "$.keys[*]");
for( Object key: keys) {
HashMap<String, String> keyContents = (HashMap<String, String>)key;
System.out.println("custom_field_1 is \"" + keyContents.get("custom_field_1") + "\"");
System.out.println("custom_field_2 is \"" + keyContents.get("custom_field_2") + "\"");
}
}
catch (IOException ioe) {
ioe.printStackTrace(System.err);
}
}
}
There isn't a way that I can find to use a regex for the JsonPath key, so you'll need to grab them with the full path. You can also do something like:
List<String> customField1 = JsonPath.read(jwks, "$.keys[*].custom_field_1");
to get a list of the "custom_field_1" values. To me this is less convenient, as you get all of the custom field values separately rather than within each key.
Again, I'm not finding support for custom JWK fields anywhere. For JWTs, no problem, but not for JWKs. So if you've got fields like this, I think you'll need to extract them yourself rather than relying on the standard libraries.
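If you would rather not add JsonPath, the same "by hand" approach works with any plain JSON parser, for example Jackson. This is just an equivalent sketch of the idea (with the inlined JWKS trimmed down to the custom fields), not something the Nimbus library provides:
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class JWKSParserJackson {
    public static void main(String[] argv) throws Exception {
        // Trimmed-down JWKS; in practice this would be the full document shown above.
        String jwks = "{\"keys\":[{\"kty\":\"RSA\",\"custom_field_1\":\"this is some content\","
                + "\"custom_field_2\":\"this is some content, too\"}]}";
        JsonNode root = new ObjectMapper().readTree(jwks);
        for (JsonNode key : root.get("keys")) {
            // path() returns a "missing" node rather than null when a field is absent
            System.out.println("custom_field_1 is \"" + key.path("custom_field_1").asText() + "\"");
            System.out.println("custom_field_2 is \"" + key.path("custom_field_2").asText() + "\"");
        }
    }
}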
Is there a way to determine the number of batches that were created by a Kafka producer for a specific set of messages? For instance, if I am sending 10K messages in a loop, is there a way to check how many batches were sent? I set "batch.size" to a high value and my expectation was that the messages would be buffered and there would be a delay before they appear in my consumer. However, they seem to be printed almost immediately in my consumer program.
The default value of batch.size is 16384. Is this a number of bytes?
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import java.util.Date;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
public class KafkaProducerApp {
public static void main(String[] args){
Properties properties = new Properties();
properties.put("bootstrap.servers","localhost:9092,localhost:9093,localhost:9094");
properties.put("key.serializer","org.apache.kafka.common.serialization.StringSerializer");
properties.put("value.serializer","org.apache.kafka.common.serialization.StringSerializer");
properties.put("acks","0");
properties.put("batch.size",33554432);
KafkaProducer<String,String> kafkaProducer = new KafkaProducer<String, String>(properties);
Map<Integer,Integer> partitionCount = new HashMap<Integer,Integer>();
partitionCount.put(0,0);
partitionCount.put(1,0);
partitionCount.put(2,0);
try{
Date from = new Date();
for(int i=0;i<10000;i++) {
RecordMetadata ack = kafkaProducer.send(new ProducerRecord<String, String>("test_topic", Integer.toString(i), "MyMessage" + Integer.toString(i))).get();
//RecordMetadata ack = kafkaProducer.send(new ProducerRecord<String,String>("test_topic",0,Integer.toString(i), "MyMessage" + Integer.toString(i))).get();
System.out.println(" Offset = " + ack.offset());
System.out.println(" Partition = " + ack.partition());
partitionCount.put(ack.partition(),partitionCount.get(ack.partition())+1);
}
Date to = new Date();
System.out.println(" partition 0 =" + partitionCount.get(0));
System.out.println(" partition 1 =" + partitionCount.get(1));
System.out.println(" partition 2 =" + partitionCount.get(2));
System.out.println(" Elapsed Time = " + (to.getTime()-from.getTime())/1000);
} catch (Exception ex){
ex.printStackTrace();
} finally {
kafkaProducer.close();
}
}
}
What you are asking for is effectively the total number of produce requests. (And yes, batch.size is specified in bytes.)
You can see the average number of produce requests per second via JMX using the MBean kafka.producer:type=producer-metrics,client-id=([-.\w]+), specifically its request-rate attribute.
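If you would rather not attach a JMX console, the producer exposes the same metrics programmatically through KafkaProducer.metrics(). A rough sketch of dumping the request and batch-size metrics after the send loop; the metric names ("request-total", "batch-size-avg") and Metric.metricValue() are taken from recent client versions, so verify them against the client you are running:
// ... at the end of the try block, after the send loop ...
kafkaProducer.metrics().forEach((name, metric) -> {
    if ("producer-metrics".equals(name.group())
            && ("request-total".equals(name.name()) || "batch-size-avg".equals(name.name()))) {
        // metricValue() on newer clients; older clients expose value() instead
        System.out.println(name.name() + " = " + metric.metricValue());
    }
});
As a side note, calling .get() on every send() blocks until that record has actually been sent, so the accumulator rarely gets a chance to build large batches no matter how big batch.size is, which would also explain why the messages show up in the consumer almost immediately.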
While working on my DAG in Hazelcast Jet, I stumbled into a weird problem. To track down the error I dumbed my approach down completely, and it seems that the edges are not working according to the tutorial.
The code below is almost as simple as it gets. Two vertices (one source, one sink), one edge.
The source is reading from a map, the sink should put into a map.
The data.addEntryListener correctly tells me that the map is filled with 100 lists (each with 25 objects of 400 bytes) by another application ... and then nothing. The map fills up, but the DAG doesn't interact with it at all.
Any idea where to look for the problem?
package be.andersch.clusterbench;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.hazelcast.config.Config;
import com.hazelcast.config.SerializerConfig;
import com.hazelcast.core.EntryEvent;
import com.hazelcast.jet.*;
import com.hazelcast.jet.config.JetConfig;
import com.hazelcast.jet.stream.IStreamMap;
import com.hazelcast.map.listener.EntryAddedListener;
import be.andersch.anotherpackage.myObject;
import java.util.List;
import java.util.concurrent.ExecutionException;
import static com.hazelcast.jet.Edge.between;
import static com.hazelcast.jet.Processors.*;
/**
* Created by abernard on 24.03.2017.
*/
public class Analyzer {
private static final ObjectMapper mapper = new ObjectMapper();
private static JetInstance jet;
private static final IStreamMap<Long, List<String>> data;
private static final IStreamMap<Long, List<String>> testmap;
static {
JetConfig config = new JetConfig();
Config hazelConfig = config.getHazelcastConfig();
hazelConfig.getGroupConfig().setName( "name" ).setPassword( "password" );
hazelConfig.getNetworkConfig().getInterfaces().setEnabled( true ).addInterface( "my_IP_range_here" );
hazelConfig.getSerializationConfig().getSerializerConfigs().add(
new SerializerConfig().
setTypeClass(myObject.class).
setImplementation(new OsamKryoSerializer()));
jet = Jet.newJetInstance(config);
data = jet.getMap("data");
testmap = jet.getMap("testmap");
}
public static void main(String[] args) throws ExecutionException, InterruptedException {
DAG dag = new DAG();
Vertex source = dag.newVertex("source", readMap("data"));
Vertex test = dag.newVertex("test", writeMap("testmap"));
dag.edge(between(source, test));
jet.newJob(dag).execute().get();
data.addEntryListener((EntryAddedListener<Long, List<String>>) (EntryEvent<Long, List<String>> entryEvent) -> {
System.out.println("Got data: " + entryEvent.getKey() + " at " + System.currentTimeMillis() + ", Size: " + jet.getHazelcastInstance().getMap("data").size());
}, true);
testmap.addEntryListener((EntryAddedListener<Long, List<String>>) (EntryEvent<Long, List<String>> entryEvent) -> {
System.out.println("Got test: " + entryEvent.getKey() + " at " + System.currentTimeMillis());
}, true);
Runtime.getRuntime().addShutdownHook(new Thread(() -> Jet.shutdownAll()));
}
}
The Jet job is already finished at the line jet.newJob(dag).execute().get(), before you even created the entry listeners. This means that the job runs on an empty map. Maybe your confusion is about the nature of this job: it's a batch job, not an infinite stream processing one. Jet version 0.3 does not yet support infinite stream processing.
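A quick way to see the DAG actually move data, staying within Jet 0.3's batch semantics, is to register the listeners first, wait until the other application has filled the "data" map, and only then submit the job. A minimal sketch of the reordered main method (the 100-entry threshold is taken from the description above; polling is just the simplest possible trigger):
public static void main(String[] args) throws ExecutionException, InterruptedException {
    // Register the listeners before anything else, so the fill can be observed.
    data.addEntryListener((EntryAddedListener<Long, List<String>>) (EntryEvent<Long, List<String>> entryEvent) ->
            System.out.println("Got data: " + entryEvent.getKey()), true);
    testmap.addEntryListener((EntryAddedListener<Long, List<String>>) (EntryEvent<Long, List<String>> entryEvent) ->
            System.out.println("Got test: " + entryEvent.getKey()), true);

    // Wait until the other application has populated the source map.
    while (data.size() < 100) {
        Thread.sleep(100);
    }

    // Only now build and run the batch job, so it sees a non-empty map.
    DAG dag = new DAG();
    Vertex source = dag.newVertex("source", readMap("data"));
    Vertex test = dag.newVertex("test", writeMap("testmap"));
    dag.edge(between(source, test));
    jet.newJob(dag).execute().get();

    Runtime.getRuntime().addShutdownHook(new Thread(() -> Jet.shutdownAll()));
}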
Could anyone help me with the issue below?
I want to pass test cases in QC through Java. I used com4j and got as far as the test sets, but I am unable to fetch the test cases under the respective test set.
Could anyone please help me with how to pass test cases in QC through com4j?
import com.qc.ClassFactory;
import com.qc.ITDConnection;
import com.qc.ITestLabFolder;
import com.qc.ITestSetFactory;
import com.qc.ITestSetTreeManager;
import com.qc.ITestSetFolder;
import com.qc.IList;
import com.qc.ITSTest;
import com.qc.ITestSet;
import com.qc.ITestFactory;
import com4j.*;
import com4j.stdole.*;
import com4j.tlbimp.*;
import com4j.tlbimp.def.*;
import com4j.tlbimp.driver.*;
import com4j.util.*;
import com4j.COM4J;
import java.util.*;
import com.qc.IRun;
import com.qc.IRunFactory;
public class Qc_Connect {
public static void main(String[] args) {
// TODO Auto-generated method stub
String url="http://abc/qcbin/";
String domain="abc";
String project="xyz";
String username="132222";
String password="Xyz";
String strTestLabPath = "Root\\Test\\";
String strTestSetName = "TestQC";
try{
ITDConnection itd=ClassFactory.createTDConnection();
itd.initConnectionEx(url);
System.out.println("COnnected To QC:"+ itd.connected());
itd.connectProjectEx(domain,project,username,password);
System.out.println("Logged into QC");
//System.out.println("Project_Connected:"+ itd.connected());
ITestSetFactory objTestSetFactory = (itd.testSetFactory()).queryInterface(ITestSetFactory.class);
ITestSetTreeManager objTestSetTreeManager = (itd.testSetTreeManager()).queryInterface(ITestSetTreeManager.class);
ITestSetFolder objTestSetFolder =(objTestSetTreeManager.nodeByPath(strTestLabPath)).queryInterface(ITestSetFolder.class);
IList its1 = objTestSetFolder.findTestSets(strTestSetName, true, null);
//IList ls= objTestSetFolder.findTestSets(strTestSetName, true, null);
System.out.println("No. of Test Set:" + its1.count());
ITestSet tst= (ITestSet) objTestSetFolder.findTestSets(strTestSetName, true, null).queryInterface(ITSTest.class);
System.out.println(tst.name());
//System.out.println( its1.queryInterface(ITestSet.class).name());
/* foreach (ITestSet testSet : its1.queryInterface(ITestSet.class)){
ITestSetFolder tsFolder = (ITestSetFolder)testSet.TestSetFolder;
ITSTestFactory tsTestFactory = (ITSTestFactory)testSet.TSTestFactory;
List tsTestList = tsTestFactory.NewList("");
}*/
/* Com4jObject comObj = (Com4jObject) its1.item(0);
ITestSet tst = comObj.queryInterface(ITestSet.class);
System.out.println("Test Set Name : " + tst.name());
System.out.println("Test Set ID : " + tst.id());
System.out.println("Test Set ID : " + tst.status());
System.out.println("Test Set ID : " );*/
System.out.println(its1.count());
System.out.println("TestSet Present");
Iterator itr = its1.iterator();
System.out.println(itr.hasNext());
while (itr.hasNext())
{
Com4jObject comObj = (Com4jObject) itr.next();
ITestSet sTestSet = comObj.queryInterface(ITestSet.class);
System.out.println(sTestSet.name());
Com4jObject comObj2 = sTestSet.tsTestFactory();
ITestSetFactory test = comObj2.queryInterface(ITestSetFactory.class);
}
// ITSTest tsTest=null;
// tsTest.
//its1.
/* comObj = (Com4jObject) its1.item(1);
ITSTest tst2=comObj.queryInterface(ITSTest.class);*/
// System.out.println( tst2.name());
/* foreach (ITSTest tsTest : tst2)
{
IRun lastRun = (IRun)tsTest.lastRun();
if (lastRun == null)
{
IRunFactory runFactory = (IRunFactory)tsTest.runFactory;
String date = "20160203";
IRun run = (IRun)runFactory.addItem( date);
run.status("Pass");
run.autoPost();
}
}*/
}
catch(Exception e){
e.printStackTrace();
}
}
}
I know the post is quite old. I had to struggle a lot with OTA in Java and couldn't find a complete post solving the issue.
Now I have running code after a lot of research, so I thought I'd share my code in case someone is looking for help.
Here is the complete solution.
// "connection" below is an ITDConnection that has already been initialized and
// connected to the domain/project, as in the question above.
ITestFactory sTestFactory = (connection.testFactory())
.queryInterface(ITestFactory.class);
ITest iTest1 = (sTestFactory.item(12081)).queryInterface(ITest.class);
System.out.println(iTest1.execDate());
System.out.println(iTest1.name());
ITestSetFactory sTestSetFactory = (connection.testSetFactory())
.queryInterface(ITestSetFactory.class);
ITestSet sTestSet = (sTestSetFactory.item(1402))
.queryInterface(ITestSet.class);
System.out.println(sTestSet.name() + "\n Test Set ID" + sTestSet.id());
IBaseFactory testFactory1 = sTestSet.tsTestFactory().queryInterface(
IBaseFactory.class);
testFactory1.addItem(iTest1);
System.out.println("Test case has been Added");
System.out.println(testFactory1.newList("").count());
IList tsTestlist = testFactory1.newList("");
ITSTest tsTest;
for (int tsTestIndex = 1; tsTestIndex <= tsTestlist.count(); tsTestIndex++) {
Com4jObject comObj = (Com4jObject) tsTestlist.item(tsTestIndex);
tsTest = comObj.queryInterface(ITSTest.class);
if (tsTest.name().equalsIgnoreCase("[3]TC_OTA_API_Test")) {
System.out.println("Hostname" + tsTest.hostName() + "\n"
+ tsTest.name() + "\n" + tsTest.status());
IRun lastRun = (IRun) tsTest.lastRun();
// IRun lastRun = comObjRun.queryInterface(IRun.class);
// don't update test if it may have been modified by someone
// else
if (lastRun == null) {
System.out.println("I am here last Run = Null");
IRunFactory runFactory = tsTest.runFactory().queryInterface(
IRunFactory.class);
System.out.println(runFactory.newList("").count());
String runName = "TestRun_Automated";
Com4jObject comObjRunForThisTS = runFactory
.addItem(runName);
IRun runObjectForThisTS = comObjRunForThisTS
.queryInterface(IRun.class);
runObjectForThisTS.status("Passed");
runObjectForThisTS.post();
runObjectForThisTS.refresh();
}
}
}
Why not build a client to access the REST API instead of passing through the OTA interface?
Once you build a basic client, you can post runs and update their status quite easily.
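I don't have a full client to paste, but the basic shape is: authenticate against the server, keep the session cookies, then PUT an updated run entity. A rough sketch with plain java.net; the endpoint paths (/qcbin/authentication-point/authenticate, /qcbin/rest/domains/{domain}/projects/{project}/runs/{id}) and the entity XML follow the ALM REST API reference, so double-check them against your ALM version (newer versions also want a /rest/site-session call after authenticating):
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;
import java.util.Collections;

public class QcRestClientSketch {
    public static void main(String[] args) throws Exception {
        String base = "https://almserver.tld/qcbin";            // placeholder server
        String auth = Base64.getEncoder().encodeToString("user:password".getBytes("UTF-8"));

        // 1) Authenticate; ALM answers with session cookies (LWSSO_COOKIE_KEY etc.).
        HttpURLConnection login = (HttpURLConnection) new URL(base + "/authentication-point/authenticate").openConnection();
        login.setRequestProperty("Authorization", "Basic " + auth);
        String cookies = String.join("; ",
                login.getHeaderFields().getOrDefault("Set-Cookie", Collections.emptyList()));

        // 2) Update the status of an existing run (domain, project and run id are placeholders).
        String runXml = "<Entity Type=\"run\"><Fields><Field Name=\"status\">"
                + "<Value>Passed</Value></Field></Fields></Entity>";
        HttpURLConnection put = (HttpURLConnection) new URL(base + "/rest/domains/DEFAULT/projects/MyProject/runs/1001").openConnection();
        put.setRequestMethod("PUT");
        put.setRequestProperty("Cookie", cookies);
        put.setRequestProperty("Content-Type", "application/xml");
        put.setDoOutput(true);
        try (OutputStream os = put.getOutputStream()) {
            os.write(runXml.getBytes("UTF-8"));
        }
        System.out.println("PUT run 1001 -> HTTP " + put.getResponseCode());
    }
}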
With C#/VB.NET this is easily done. But since you are working in Java, I would suggest building an interface on top of the OTA DLLs to handle these operations. That will be much easier than using com4j.
A similar query; the following may help you. I would suggest dropping the idea of using com4j and using the solution provided in the thread below, which is proven, fail-safe, and auto-recoverable.
QC API JAR to connect using java
It has always been difficult to use com4j, especially for HP QC/ALM, as the DLLs for QC are faulty and there are memory leak/allocation problems that frequently crash DLL execution on certain platforms.
I wrote the program below to understand how Elasticsearch can be used to do full-text search. When I search for individual words it works fine, but I want to search for combinations of words, and that is not working.
package in.blogspot.randomcompiler.elastic_search_demo;
import in.blogspot.randomcompiler.elastic_search_impl.Event;
import java.util.Date;
import org.elasticsearch.action.count.CountRequestBuilder;
import org.elasticsearch.action.count.CountResponse;
import org.elasticsearch.action.delete.DeleteResponse;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.action.search.SearchRequestBuilder;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.index.query.FilterBuilder;
import org.elasticsearch.index.query.FilterBuilders;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.SearchHits;
import com.fasterxml.jackson.core.JsonProcessingException;
public class ElasticSearchDemo
{
public static void main( String[] args ) throws JsonProcessingException
{
Client client = new TransportClient()
.addTransportAddress(new InetSocketTransportAddress("localhost", 9301));
DeleteResponse deleteResponse1 = client.prepareDelete("chat-data", "event", "1").execute().actionGet();
DeleteResponse deleteResponse2 = client.prepareDelete("chat-data", "event", "2").execute().actionGet();
DeleteResponse deleteResponse3 = client.prepareDelete("chat-data", "event", "3").execute().actionGet();
Event e1 = new Event("LOGIN", new Date(), "Agent1 logged into chat");
String e1Json = e1.prepareJson();
System.out.println("JSON: " + e1Json);
IndexResponse indexResponse1 = client.prepareIndex("chat-data", "event", "1").setSource(e1Json).execute().actionGet();
printIndexResponse("e1", indexResponse1);
Event e2 = new Event("LOGOUT", new Date(), "Agent1 logged out of chat");
String e2Json = e2.prepareJson();
System.out.println("JSON: " + e2Json);
IndexResponse indexResponse2 = client.prepareIndex("chat-data", "event", "2").setSource(e2Json).execute().actionGet();
printIndexResponse("e2", indexResponse2);
Event e3 = new Event("BREAK", new Date(), "Agent1 went on break in the middle of a chat");
String e3Json = e3.prepareJson();
System.out.println("JSON: " + e3Json);
IndexResponse indexResponse3 = client.prepareIndex("chat-data", "event", "3").setSource(e3Json).execute().actionGet();
printIndexResponse("e3", indexResponse3);
FilterBuilder filterBuilder = FilterBuilders.termFilter("value", "break middle");
SearchRequestBuilder searchBuilder = client.prepareSearch();
searchBuilder.setPostFilter(filterBuilder);
CountRequestBuilder countBuilder = client.prepareCount();
countBuilder.setQuery(QueryBuilders.constantScoreQuery(filterBuilder));
CountResponse countResponse1 = countBuilder.execute().actionGet();
System.out.println("HITS: " + countResponse1.getCount());
SearchResponse searchResponse1 = searchBuilder.execute().actionGet();
SearchHits hits = searchResponse1.getHits();
for(int i=0; i<hits.hits().length; i++) {
SearchHit hit = hits.getAt(i);
System.out.println("[" + i + "] " + hit.getId() + " : " +hit.sourceAsString());
}
client.close();
}
private static void printIndexResponse(String description, IndexResponse response) {
System.out.println("Index response for: " + description);
System.out.println("Index name: " + response.getIndex());
System.out.println("Index type: " + response.getType());
System.out.println("Index id: " + response.getId());
System.out.println("Index version: " + response.getVersion());
}
}
The issue I am facing is that when I search for "break middle" it returns nothing; the expectation is that it should return the third event.
I understand that I need to configure a different analyzer rather than the default one to make it index appropriately.
Could someone please help me understand how to do that? A complete example would be great to have.
The problem is caused by your use of the term filter:
FilterBuilder filterBuilder = FilterBuilders.termFilter("value", "break middle");
A Term filter doesn't analyse the data in the query string - so Elasticsearch is looking for the exact string "break middle".
However the third document will probably have been broken down by ES into individual terms as follows:
Agent1
went
on
break
in
the
middle
of
a
chat
To fix the issue, use a filter or query that analyses the string you're passing, for example a query_string query or a match query.
For example:
QueryBuilder qb = QueryBuilders.matchQuery("value", "break middle");
or:
QueryBuilder qb = QueryBuilders.queryString("break middle");
See the Java API documentation for Elasticsearch for more info.
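To tie it back to the original program, here is roughly how the search section could look with a match query in place of the term filter; the field name "value" and the "chat-data"/"event" index and type are taken from the code above, and the rest of the program stays the same:
// Replace the FilterBuilders.termFilter(...) section with an analysed match query.
SearchResponse searchResponse = client.prepareSearch("chat-data")
        .setTypes("event")
        .setQuery(QueryBuilders.matchQuery("value", "break middle"))
        .execute()
        .actionGet();

SearchHits matchHits = searchResponse.getHits();
System.out.println("HITS: " + matchHits.getTotalHits());
for (SearchHit hit : matchHits.getHits()) {
    System.out.println(hit.getId() + " : " + hit.sourceAsString());
}
By default a match query ORs the analysed terms together, so this returns any event whose text contains "break" or "middle"; use the query's operator setting or a match_phrase query if you need both words, or the exact phrase, to be present.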