I am new to SNMP and am using SNMP4J to create an SNMP agent. My Java application needs to listen for SNMP requests, query the DB based on the incoming OID, and send a response back. I have source code for an SNMP agent, but how does the agent query the DB based on the incoming OID? Do I need to register all OIDs in my DB as managed objects in the agent so the agent can do the lookup when a request arrives? In other words, how do I point the agent at my datastore/DB?
This is the code I am using:
http://shivasoft.in/blog/java/snmp/creating-snmp-agent-server-in-java-using-snmp4j/
List<Oid> oidList = impl.getOidList(); // get data from the db
for (Oid oid : oidList) {
    agent.registerManagedObject(MOScalarFactory.createReadOnly(
            new OID(oid.getOid()), oid.getValue()));
}
I am trying to register the managed objects with data from the DB. Is this correct?
I am getting a duplicate registration exception on the second row, even though the OID is unique:
.1.3.6.1.4.1.1166.1.6.1.2.2.1.3.1.1
.1.3.6.1.4.1.1166.1.6.1.2.2.1.3.1.2
I don't think this is the right way, because what if the DB is huge?
Any help/tips are greatly appreciated.
Problem
You get org.snmp4j.agent.DuplicateRegistrationException because there can be only one ManagedObject per context scope. Each registration binds a ManagedObject to an MOContextScope; the second registration tries to put a second object into a scope that is already occupied, and so the exception is thrown.
Note that each scalar value SHOULD end with .0. Open any MIB browser (iReasoning, for example) and pick a value: if the value is a scalar, the trailing zero is appended automatically, even though it is not mentioned in the MIB file. So the most "correct" way is to use solution 4.1.
Solution 1 - own MOScalar
Write your own MOScalar with tighter bounds.
Override getLowerBound, getUpperBound, isLowerIncluded and isUpperIncluded so each of your objects gets its own, non-overlapping context scope.
I would suggest returning the scalar's own OID from both bound methods every time and including both boundaries.
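A minimal sketch of such a subclass, assuming the SNMP4J-Agent 2.x MOScalar API (the class name is made up):

import org.snmp4j.agent.MOAccess;
import org.snmp4j.agent.mo.MOScalar;
import org.snmp4j.smi.OID;
import org.snmp4j.smi.Variable;

// Hypothetical scalar whose context scope is exactly its own OID,
// so sibling scalars no longer produce overlapping registrations.
public class PointScopedScalar extends MOScalar<Variable> {

    public PointScopedScalar(OID id, MOAccess access, Variable value) {
        super(id, access, value);
    }

    @Override
    public OID getLowerBound() {
        return getID(); // the scope starts at this scalar's OID...
    }

    @Override
    public OID getUpperBound() {
        return getID(); // ...and ends at the very same OID
    }

    @Override
    public boolean isLowerIncluded() {
        return true; // both boundaries belong to the scope
    }

    @Override
    public boolean isUpperIncluded() {
        return true;
    }
}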
Solution 2 - own MOServer
Write your own MOServer. With blackjack and everything else...
You can mostly copy-paste the existing code, except for this field:
private SortedMap<MOScope, ManagedObject> registry;
It should look like this
private SortedMap<MOScope, Set<ManagedObject>> registry;
And it would affect registration, unregistration and other logic.
DefaultMOServer is 678 lines including comments. In fact, you have to fix several methods:
The query.matchesQuery(object) logic:
private boolean matchesQuery(MOQuery query, ManagedObject object) {
    if (query.matchesQuery(object) && object.getScope().isOverlapping(query.getScope())) {
        if (object instanceof MOScalar) {
            MOScalar moScalar = (MOScalar) object;
            return query.getLowerBound().compareTo(moScalar.getID()) <= 0
                && query.getUpperBound().compareTo(moScalar.getID()) >= 0;
        } else {
            return true;
        }
    }
    return false;
}
The fire...Event methods:
// before:
protected void fire...Event(ManagedObject objects, MOQuery query) {
// after:
protected void fire...Event(Set<ManagedObject> objects, MOQuery query) {
    if (lookupListener != null) {
        for (ManagedObject mo : objects) {
            ...

The lookup() calls:
// before:
ManagedObject other = lookup(new DefaultMOQuery(contextScope));
// after:
Set<ManagedObject> other = lookup(new DefaultMOQuery(contextScope), false);
And so on...
Solution 3 - Tables
Use table rows.
You can add a table and append rows to it.
You would then be able to access the cells as
<tableEntryOID>.<columnSubID>.<rowIndexOID>
You may use this question as a tutorial.
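As a rough sketch, registering the DB rows as one table could look like the following fragment. The OIDs and column numbers are illustrative, the API is snmp4j-agent 2.x, and agent.registerManagedObject is the helper from the tutorial linked in the question:

import org.snmp4j.agent.mo.*;
import org.snmp4j.smi.*;

// Illustrative table entry OID; in practice this comes from your MIB design.
OID tableEntryOID = new OID("1.3.6.1.4.1.1166.1.6.1.2.2.1");

// A single integer sub-index identifies the rows.
MOTableIndex index = new MOTableIndex(
        new MOTableSubIndex[] { new MOTableSubIndex(SMIConstants.SYNTAX_INTEGER) },
        false);

// One read-only string column with sub-ID 3.
MOColumn[] columns = new MOColumn[] {
        new MOColumn(3, SMIConstants.SYNTAX_OCTET_STRING, MOAccessImpl.ACCESS_READ_ONLY)
};

DefaultMOTable table = new DefaultMOTable(tableEntryOID, index, columns);
MOMutableTableModel model = (MOMutableTableModel) table.getModel();

// One row per DB record; the cell is then addressable as
// <tableEntryOID>.<columnSubID>.<rowIndexOID>, e.g. <tableEntryOID>.3.1 here.
model.addRow(new DefaultMOMutableRow2PC(new OID("1"),
        new Variable[] { new OctetString("value-from-db") }));

agent.registerManagedObject(table);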
Solution 4 - OIDs fixup
Make your OIDs use different context scopes.
Solution 4.1
Add a trailing zero:
agent.registerManagedObject(
        MOScalarFactory.createReadOnly(
                new OID(oid.getOid()).successor(),
                oid.getValue()));
This appends .0 to each of the same-level properties:
snmpget -v2c -c public localhost:2001 oid.getOid().0
Also, any MIB browser will append .0 to each scalar OID defined in the MIB file. You can check this with iReasoning, the most popular browser: even hrSystemUptime (.1.3.6.1.2.1.25.1.1, bottom left) is requested as hrSystemUptime.0 (.1.3.6.1.2.1.25.1.1.0) at the top.
Solution 4.2
Separate the OIDs at the base:
static final OID sysDescr1 = new OID("1.3.6.1.4.1.5.6.1.8.9"),
sysDescr2 = new OID("1.3.6.1.4.1.5.6.2.2.5");
Fix the database OIDs so that they get separate context scopes.
In addition
You can try reading SNMP4J-Agent-Instrumentation-Guide.pdf; it didn't help me, by the way.
You can also attach the sources to your IDE to read about the zero trailer and other nuances. This helped me a lot in understanding DefaultMOServer.
Use this pom.xml import to get the latest version:
<repositories>
    <repository>
        <id>SNMP4J</id>
        <url>https://oosnmp.net/dist/release/</url>
    </repository>
</repositories>
<dependencies>
    <dependency>
        <groupId>org.snmp4j</groupId>
        <artifactId>snmp4j-agent</artifactId>
        <version>2.3.2</version>
    </dependency>
</dependencies>
First of all, OIDs do start with a number - not a dot. The syntax you are using is from NET-SNMP and is non-standard.
Second, please read the SNMP4J-Agent-Instrumentation-Guide.pdf document, which describes in detail how you can instrument an agent for a MIB. You got the duplicate registration exception because you registered a scalar as a sub-tree: scalar OIDs have to end with the ".0" instance suffix.
Using the CommandResponder interface is sort of reinventing the wheel. You will most likely never manage to implement a secure and standards-conformant SNMP agent if you start from scratch. Using SNMP4J-Agent and its instrumentation hooks will save you a lot of work and trouble.
Related
I'm deploying a little backend with some methods. One of them makes a simple query to retrieve a list of objects. This is the method:
@ApiMethod(path = "getMessagesByCity", name = "getMessagesByCity", httpMethod = ApiMethod.HttpMethod.POST)
public MessageResponse getMessagesByCity(@Named("City_id") Long city) {
    MessageResponse response = new MessageResponse();
    List<Message> message = ofy().load().type(Message.class).filter("city", city).list();
    response.response = 200;
    return response;
}
And this is the Message class:
@Entity
public class Message {
    @Id
    private Long id;
    private String name;
    @Index
    private Long city;
    ...
}
I've read a lot of posts, and most of them say the problem is probably that datastore-indexes.xml is not being updated automatically. However, the Google docs (https://cloud.google.com/appengine/docs/standard/python/config/indexconfig) say this:
Every Cloud Datastore query made by an application needs a
corresponding index. Indexes for simple queries, such as queries over
a single property, are created automatically.
So, following that, I think the index-related files are not necessary in my case.
If I execute the method "getMessagesByCity" with the simple query:
List<Message> message = ofy().load().type(Message.class).filter("city", city).list();
The backend returns error 503 with this log message:
"com.google.appengine.api.datastore.DatastoreNeedIndexException: no
matching index found. An index is missing but we are unable to tell
you which one due to a bug in the App Engine SDK. If your query only
contains equality filters you most likely need a composite index on
all the properties referenced in those filters."
Any idea? How can I solve it?
You need to upload your index configuration so that Datastore starts accepting your queries with custom projections. Use this command:
gcloud app deploy index.yaml
See https://cloud.google.com/datastore/docs/concepts/indexes for more information about Datastore queries handling and indexes.
Every time you use a new Datastore query in your code with a different set of filters/orders etc., your index.yaml should be updated automatically (you might need to run that logic at least once on the local dev server for it to add the new index to the file).
On local dev, the first time you hit the query it should work. HOWEVER, when deploying new indexes there is a lag before they become available in production on the appspot server. We have run into this a lot, and from the Google console you can actually see whether an index build is still in progress by going to Datastore > Indexes (https://console.cloud.google.com/datastore/indexes) for the project in question.
If all indexes have a green tick and the issue persists, then this is not the problem and you can debug further. However, if some have spinners next to them, those indexes are still being built and cannot be used until they are finished.
If this is your problem, you can avoid it in the future by first deploying index.yaml through gcloud and only then deploying your application.
Alternatively, make sure you have run the new method/function locally and that index.yaml did in fact get changed; if you use Git or similar, the file should show up as modified after the local server ran the function/method.
I am processing an Avro file with a list of records and doing a client.put for each record to my local Aerospike store.
For some reason, the put succeeds for a certain number of records and fails for the rest. I am doing this -
client.put(writePolicy, recordKey, bins);
The related values for the failed call are -
namespace = test
setname = test_set
userkey = some_string
write policy = null
Bins -
is_user:1
prof_loc:530049,530046,530032,530031,530017,530016,500046
rfm:Platinum
store_browsed:some_string
store_purch:some_string
city_id:null
Log Snippet -
com.aerospike.client.AerospikeException: Error Code 4: Parameter error
at com.aerospike.client.command.WriteCommand.parseResult(WriteCommand.java:72)
at com.aerospike.client.command.SyncCommand.execute(SyncCommand.java:56)
at com.aerospike.client.AerospikeClient.put(AerospikeClient.java:338)
What could possibly be the issue?
Finally resolved!
I was using the REPLACE RecordExistsAction in this case. Any bin with a null value will fail under this configuration: Aerospike treats a null value in a bin as equivalent to removing that bin from the record for that key, so the REPLACE configuration doesn't make sense for such an operation, hence the parameter error (invalid DB operation).
The UPDATE configuration, on the other hand, works perfectly fine.
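A sketch of the fix (not my exact code; the namespace, set and bin names echo the values above):

import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Bin;
import com.aerospike.client.Key;
import com.aerospike.client.policy.RecordExistsAction;
import com.aerospike.client.policy.WritePolicy;

AerospikeClient client = new AerospikeClient("127.0.0.1", 3000);

// UPDATE accepts bins with null values (treated as bin deletions);
// the very same call fails with "Error Code 4: Parameter error" under REPLACE.
WritePolicy writePolicy = new WritePolicy();
writePolicy.recordExistsAction = RecordExistsAction.UPDATE;

Key recordKey = new Key("test", "test_set", "some_string");
client.put(writePolicy, recordKey,
        new Bin("is_user", 1),
        Bin.asNull("city_id")); // null bin, as in the failing record above

client.close();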
Aerospike allows reading and writing with great flexibility. To let developers harness this functionality, Aerospike exposes a great number of variables on both Policy and WritePolicy, which at times can be intimidating and error-prone for beginners. A parameter error simply means that some of those configuration values are not coherent with each other. An easy start is to use the default read or write policy, which you can get by passing null as the policy.
Eg:
aeroClient.put(null, key, new Bin("binName", object));
Below is the code snippet of the Aerospike put method:
public final void put(WritePolicy policy, Key key, Bin... bins) throws AerospikeException {
    if (policy == null) {
        policy = writePolicyDefault;
    }
    WriteCommand command = new WriteCommand(cluster, policy, key, bins, Operation.Type.WRITE);
    command.execute();
}
I recently got this error because the expiration value I was using in the writePolicy was greater than the default expiration time for the cache.
I am very new to XPages. I have been searching the web for an answer to my question for a while now; it seems like the answer should be simple.
I have been playing around with a snippet of code that I got from Brad Balassaitis's excellent Xcellerent.net site, which dynamically populates a list of "jump to" items for a view panel. The code runs from the beforeRenderResponse event of the XPage.
var viewName = getComponent('viewPanel1').getData().getViewName();
var vw = database.getView(viewName);
var colNum = 1;
var cols:Vector = vw.getColumns();
for (var i=0; i < cols.length; i++) {
if (cols[i].isSorted() && !cols[i].isHidden()) {
colNum = i + 1;
break;
}
}
var letters = @DbColumn(null, viewName, colNum);
var options = @Trim(@Unique(@UpperCase(@Left(letters, 1))));
viewScope.put('jumpToOptions', options);
It works beautifully, but I want to modify the code to reference a view in a different database. In the post, Brad says the code can be "enhanced" to accomplish this, but I have been experimenting and searching for a while and cannot figure out the enhancement.
Thanks for any help.
--Lisa
In your second line, you establish a handle on the view by the viewName you pull from the component viewPanel1. Your call is database.getView(viewName), which amounts to a programmatic reference of NotesDatabase.getView(). If you get a handle on the other database you want to connect to, then you can invoke the same .getView() call on that handle.
First, establish your connection to the other database; this is done via the session keyword (which is a NotesSession), as such:
var extDB = session.getDatabase(session.getServerName(), dbName); // getDatabase() takes both a server name and a database file path
As Howard points out, that session keyword is the current user's session and will be subject to all ACL rights/assignments/roles as that user. If you need to elevate privileges to programmatically expose additional data, you can do so with the sessionAsSigner keyword (which is also a NotesSession, just with the credentials of the signer, yourself, or you can have the NSF signed as the server ID, to give it even higher privileges).
Then proceed as usual with your extDB handle in place of the database keyword (which is about the same as session.getCurrentDatabase()); like so:
var vw = extDB.getView(viewName)
The NotesDatabase.getView() call will return null if a View by that name doesn't exist in that NSF, so you'll want to ensure that it's there and programmatically check for and handle a null return.
[Edit]
Since you're using the ported @ function @DbColumn as-is, the approach Frantisek Kossuth suggests may be easy, but it relies on the NotesSession of the current user. To override that user's (lack of) privileges and get full visibility of all documents' values in the separate NSF, you would still need to get a handle on the columnValues of the View as shown above, using the sessionAsSigner keyword.
[/Edit]
Based on your code, you need to specify the database in the @DbColumn formula, too:
var letters = @DbColumn([database], viewName, colNum);
You can read about it here or there...
Aside from the documented formats, you can use the API format "server!!database" as a single string value.
I'm new to Drools Expert. Starting from the Drools sample project, all I can do so far is print something to the console. I have now integrated Drools into a web project successfully: I am able to print something to the console depending on the user's interaction with the page.
My rule currently looks like this:
rule "A test Rule"
when
m: FLTBean ( listeningScore == 1, test : listeningScore )
then
System.out.println( test );
end
So what if I want to print the result on a web page instead? How would I do that? Do I need to return some value back to the Java code and render it on the page?
In order to display something on a web page, you need to use the API to invoke Drools and get some output, which can then be rendered by your web application.
Therefore, you need to consider how to get output from it within your Java code. There are a few ways of doing this.
For example, when performing a simple action such as validating a request, just operate on the request fact that you insert. For instance:
rule "IBAN doesn't begin with a country ISO code."
no-loop
when
$req: IbanValidationRequest($iban:iban, $country:iban.substring(0, 2))
not Country(isoCode == $country) from countryList
then
$req.reject("The IBAN does not begin with a 2-character country code. '" + $country + "' is not a country.");
update($req);
end
In that example, I'm calling a "reject" method on the fact which I inserted. That modifies the inserted fact, so that after rules execution, I have an object in my Java code, with a flag to indicate whether it was rejected or not. This method works well for stateless knowledge sessions. i.e.
Java code - Insert request fact via API
Drools rule - Modify the request fact (flag rejection, annotate, set properties, etc)
Java code - Look at the fact to see what was done to it
The following example of how to perform this interaction is taken from this full class:
https://github.com/gratiartis/sctrcd-payment-validation-web/blob/master/src/main/java/com/sctrcd/payments/validation/payment/RuleBasedPaymentValidator.java
// Create a new knowledge session from an existing knowledge base
StatelessKnowledgeSession ksession = kbase.newStatelessKnowledgeSession();
// Create a validation request
PaymentValidationRequest request = new PaymentValidationRequest(payment);
// A stateless session is executed with a collection of Objects, so we
// create that collection containing just our request.
List<Object> facts = new ArrayList<Object>();
facts.add(request);
// And execute the session with that request
ksession.execute(facts);
// At this point, rules such as that above should have been activated.
// The rules modify the original request fact, setting a flag to indicate
// whether it is valid and adding annotations to indicate if/why not.
// They may have added annotations to the request, which we can now read.
FxPaymentValidationResult result = new FxPaymentValidationResult();
// Get the annotations that were added to the request by the rules.
result.addAnnotations(request.getAnnotations());
return result;
An alternative, in a stateful session, is for rules to insert facts into working memory. After executing the rules, you can then query the session via the API and retrieve one or more result objects. You can get all facts in the session using the getObjects() method of the KnowledgeSession; to get facts with particular properties, there is also a getObjects(ObjectFilter) method. The project linked below has examples of using these methods in the KnowledgeEnvironment and DroolsUtil classes. A minimal sketch of that flow follows.
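This is only a sketch, using the Drools 5 API as elsewhere in this answer; ValidationResult is a hypothetical fact class that the rules would insert:

import java.util.Collection;

import org.drools.runtime.ObjectFilter;
import org.drools.runtime.StatefulKnowledgeSession;

// Stateful variant: fire the rules, then pull result facts back out
// of working memory with an ObjectFilter.
StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
ksession.insert(request);
ksession.fireAllRules();

Collection<?> results = ksession.getObjects(new ObjectFilter() {
    public boolean accept(Object object) {
        return object instanceof ValidationResult;
    }
});

ksession.dispose(); // stateful sessions must be disposed explicitly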
Alternatively, you could insert a service as a global variable. The rules could then invoke methods on that service.
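A sketch of that approach (AuditService is a made-up example; in the DRL you would declare global com.example.AuditService auditService and call its methods on the right-hand side):

// Register the service before executing the rules; rule consequences can
// then call auditService methods to push output toward the web layer.
StatelessKnowledgeSession ksession = kbase.newStatelessKnowledgeSession();
ksession.setGlobal("auditService", new AuditService());
ksession.execute(facts);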
For an example of how to use Drools within a web application, I knocked up this web site recently, which provides a REST API to invoke Drools rules and get responses.
https://github.com/gratiartis/sctrcd-payment-validation-web
If you have Maven installed, you should be able to try it out pretty quickly, and play around with the code.
I have a requirement to make an IMAP client as a web application.
I achieved the sorting functionality as follows:
// userFolder is an object of IMAPFolder
Message[] messages = userFolder.getMessages();
Arrays.sort(messages, new Comparator<Message>() {
    public int compare(Message message1, Message message2) {
        int returnValue = 0;
        try {
            if (sortCriteria == SORT_SENT_DATE) {
                returnValue = message1.getSentDate().compareTo(message2.getSentDate());
            }
        } catch (Exception e) {
            System.out.println(e.getMessage());
            e.printStackTrace();
        }
        if (sortType == SORT_TYPE_DESCENDING) {
            returnValue = -returnValue;
        }
        return returnValue;
    }
});
The code snippet is not complete; it's just a brief extract.
SORT_SENT_DATE and SORT_TYPE_DESCENDING are my own constants.
This solution actually works fine, but its logic fails when it comes to paging.
Being a web-based application, I can't expect the server to load all messages for every user and sort them (we have situations with more than 1000 simultaneous users, each with a mailbox of more than 1000 messages).
It also makes no sense for the web server to load everything, sort it, and return just a small part (say 1-20), then on the next request load and sort everything again to return 21-40. Caching is possible, but what's the guarantee the user will actually make another request?
I heard there is a class called FetchProfile; can that help me here? (I guess it would still load all messages, just with only the information that's required.)
Is there any other way to achieve this?
I need a solution that would also work for the search operation (searching with paging); I have built an architecture to create a SearchTerm, but there too I would require paging.
For reference, I have asked this same question at:
http://www.coderanch.com/t/461408/Other-JSE-JEE-APIs/java/it-possible-use-IMAP-paging
You would need a server with the SORT extension, and even that may not be enough. You then issue SORT on the specific mailbox and FETCH only those message numbers that fall into your view; a sketch follows.
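With JavaMail you can reach the raw protocol through IMAPFolder.doCommand(); a rough sketch, assuming the server advertises the SORT capability (RFC 5256):

import com.sun.mail.iap.Argument;
import com.sun.mail.iap.ProtocolException;
import com.sun.mail.imap.IMAPFolder;
import com.sun.mail.imap.protocol.IMAPProtocol;

// Ask the server to sort the mailbox; afterwards fetch only the
// message numbers that fall into the requested page.
Object responses = userFolder.doCommand(new IMAPFolder.ProtocolCommand() {
    public Object processCommand(IMAPProtocol protocol) throws ProtocolException {
        Argument args = new Argument();
        args.writeAtom("(DATE)"); // sort key: sent date
        args.writeAtom("UTF-8");  // charset for the search keys
        args.writeAtom("ALL");    // search criterion: every message
        return protocol.command("SORT", args);
    }
});
// The untagged "* SORT n1 n2 n3 ..." response carries the ordered message
// numbers; parse it and call userFolder.getMessage(n) only for the page shown.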
Update based on comments:
For servers where the SORT extension is not available, the next best thing is to FETCH the header field representing the sort key for all items (e.g. FETCH 1:* BODY[HEADER.FIELDS (SUBJECT)] for the subject, or FETCH 1:* BODY[HEADER.FIELDS (DATE)] for the sent date), then sort based on that key. You will get a list of sorted message numbers this way, which should be equivalent to what the SORT command would return. In JavaMail, FetchProfile covers exactly this kind of header-only prefetch; see the sketch below.
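A minimal sketch of that header-only prefetch with JavaMail's FetchProfile:

import javax.mail.FetchProfile;
import javax.mail.Message;

// Prefetch just the header needed for sorting, instead of whole messages.
Message[] messages = userFolder.getMessages();
FetchProfile profile = new FetchProfile();
profile.add("Date"); // or "Subject" when sorting by subject
userFolder.fetch(messages, profile);
// getSentDate()/getSubject() now answer from the prefetched headers, so the
// comparator from the question runs without downloading message bodies.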
If a server-side cache is allowed, then the best way is to keep a cache of envelopes (in the IMAP ENVELOPE sense) and update it using the techniques described in RFC 4549. It is easy to sort and page given this cache.
There are two IMAP APIs in Java: the official JavaMail API and Risoretto. Risoretto is more low-level and should allow you to implement anything described above; JavaMail may be able to do so as well, but I don't have much experience with it.