How can I limit a boolean to one server?
If I use a normal boolean to create a command that can be disabled: boolean b = true;
and make it changeable with a text command:
if (event.getMessage().getContentStripped().equalsIgnoreCase("message")) {
    if (event.getMember().getPermissions().contains(Permission.ADMINISTRATOR)) {
        if (b) {
            b = false;
            event.getChannel().sendMessage("Successfully disabled the command.").queue();
        } else {
            event.getChannel().sendMessage("The command is already disabled.").queue();
        }
    }
}
it gets disabled/enabled for all servers the bot is in. I want people to only disable it for their own server though. How can I do this?
Sorry if this is easy. I haven't found anything on Google. I'm not so experienced with coding yet. I'm here to learn :)
To keep track of state per guild, you can use a map data structure with the guild's ID as the key.
private final Map<Long, Boolean> map = new HashMap<>();
You can then store the boolean using map.put(guild.getIdLong(), false) and later load it using map.get(guild.getIdLong()). Note, however, that this will not persist between program restarts, since it is only stored in memory. To persist this state, you have to use a database, such as SQLite or similar.
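A minimal sketch of how the toggle could then look per guild (assuming a JDA guild message event, so event.getGuild() is available; getOrDefault treats the command as enabled until a guild disables it):
long guildId = event.getGuild().getIdLong();
boolean enabled = map.getOrDefault(guildId, true); // enabled unless this guild explicitly disabled it
if (enabled) {
    map.put(guildId, false);
    event.getChannel().sendMessage("Successfully disabled the command.").queue();
} else {
    event.getChannel().sendMessage("The command is already disabled.").queue();
}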
Related
My scenario is that I want to trigger one stream based on another stream's input. The two streams have different types. The following is my sample code. I want to trigger one stream when a message is received from the Kafka stream.
At application startup, I can read data from the DB. Then I want to read from the DB again based on a Kafka message: whenever I receive a Kafka message in the stream, I want to fetch data from the DB again. This is my actual use case.
How can I achieve this? Is it possible?
public class DataStreamCassandraExample implements Serializable{
private static final long serialVersionUID = 1L;
static Logger LOG = LoggerFactory.getLogger(DataStreamCassandraExample.class);
private transient static StreamExecutionEnvironment env;
static DataStream<Tuple4<UUID,String,String,String>> inputRecords;
public static void main(String[] args) throws Exception {
env = StreamExecutionEnvironment.getExecutionEnvironment();
ParameterTool argParameters = ParameterTool.fromArgs(args);
env.getConfig().setGlobalJobParameters(argParameters);
Properties kafkaProps = new Properties();
kafkaProps.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,"localhost:9092");
kafkaProps.setProperty(ConsumerConfig.GROUP_ID_CONFIG, "group1");
FlinkKafkaConsumer<String> kafkaConsumer = new FlinkKafkaConsumer<>("testtopic", new SimpleStringSchema(), kafkaProps);
ClusterBuilder cb = new ClusterBuilder() {
private static final long serialVersionUID = 1L;
@Override
public Cluster buildCluster(Cluster.Builder builder) {
return builder.addContactPoint("127.0.0.1")
.withPort(9042)
.withoutJMXReporting()
.build();
}
};
CassandraInputFormat<Tuple4<UUID,String,String,String>> cassandraInputFormat =
new CassandraInputFormat<> ("select * from employee_details", cb);
//While the application starts up, read data from the table and emit it as a stream
inputRecords = getDBData(env,cassandraInputFormat);
// If any data comes from Kafka, I want to read data from the table again.
// How do I trigger the getDBData() method from inside this stream?
// The code below is not working:
DataStream<String> inputRecords1= env.addSource(kafkaConsumer)
.map(new MapFunction<String,String>() {
private static final long serialVersionUID = 1L;
@Override
public String map(String value) throws Exception {
inputRecords = getDBData(env,cassandraInputFormat);
return "OK";
}
});
//This is not printed when I call the getDBData() stream from inside the Kafka stream.
inputRecords1.print();
DataStream<Employee> empDataStream = inputRecords.map(new MapFunction<Tuple4<UUID,String,String,String>, Tuple2<String,Employee>>() {
private static final long serialVersionUID = 1L;
@Override
public Tuple2<String, Employee> map(Tuple4<UUID,String,String,String> value) throws Exception {
Employee emp = new Employee();
try{
emp.setEmpid(value.f0);
emp.setFirstname(value.f1);
emp.setLastname(value.f2);
emp.setAddress(value.f3);
}
catch(Exception e){
}
return new Tuple2<>(emp.getEmpid().toString(), emp);
}
}).keyBy(0).map(new MapFunction<Tuple2<String,Employee>,Employee>() {
private static final long serialVersionUID = 1L;
@Override
public Employee map(Tuple2<String, Employee> value)
throws Exception {
return value.f1;
}
});
empDataStream.print();
env.execute();
}
private static DataStream<Tuple4<UUID,String,String,String>> getDBData(StreamExecutionEnvironment env,
CassandraInputFormat<Tuple4<UUID,String,String,String>> cassandraInputFormat){
DataStream<Tuple4<UUID,String,String,String>> inputRecords = env
.createInput
(cassandraInputFormat
,TupleTypeInfo.of(new TypeHint<Tuple4<UUID,String,String,String>>() {}));
return inputRecords;
}
}
This is going to be a fairly long answer.
To use Flink correctly as a developer, you need an understanding of its basic concepts. I suggest you start with the architecture overview (https://ci.apache.org/projects/flink/flink-docs-release-1.11/concepts/flink-architecture.html); it contains all you need to know in order to get into the world of Flink when you come from programming.
Now, looking at your code, it will not do what you expect because of how Flink reads it. You need to understand that Flink has at least two big steps when it executes your code: first it builds an execution graph, which only describes what it needs to do. This happens at the job manager level. The second big step is to ask one or many workers to execute the graph. These two steps are sequential, and anything you do regarding the graph description has to be done at the job manager level, not inside your operations.
In your case, the graph has:
A Kafka source.
A map that will call getDBData() at the worker level (not good, because getDBData() alters the graph by adding a new input each time it is called).
The line inputRecords = getDBData(env,cassandraInputFormat); will create an orphan branch of the graph. And the line DataStream<Employee> empDataStream = inputRecords.map... will append a branch of a map->keyBy->map to that orphan branch. This will build a part of the graph that will read all the employee records from Cassandra and apply the map->keyBy->map transformations. This will not be linked with the Kafka source in any way.
Now let's get back to your need. I understand you need to fetch data for an employee when his/her ID comes from Kafka and do some operations.
The cleanest way to handle this is called Side Inputs. This is a data input that you declare when you build your graph, and the job manager handles reading the data and transmitting it to the workers. The bad news is that Side Inputs do not yet work for streaming jobs in Flink (https://issues.apache.org/jira/browse/FLINK-2491 - this bug causes streaming jobs to not create checkpoints, because side inputs finish quickly and this puts the job in a bizarre state).
That said, you still have three more options. The right one depends on the size of your employee Cassandra table.
The second option is to load all employees into a static final variable employees and use it inside your map functions. The downside of this approach is that the job manager will send a serialized copy of this variable to all workers, which may congest your network and may also overload the RAM. If the size of the table is small and should not grow big in the future, then this may be an acceptable workaround until Side Inputs work for streaming jobs. If the size of the table is big or will evolve in the future, then consider the third option.
The third option is an improvement of the second one. It uses Flink's broadcast variables (see https://flink.apache.org/2019/06/26/broadcast-state.html and https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/state/broadcast_state.html). Short story: it is the same as before, but with better transfer management. Flink will find the best way to store and send the variable to the workers. This approach is, though, a little more complicated to implement correctly.
The last option is not advisable in normal circumstances. It simply consists of making a call to Cassandra inside your map operation. This is not good practice because it adds repeated latency to all your map executions (there will be as many calls as there are items passing through Kafka). A call means creating a connection, sending the actual request with the query, waiting for Cassandra to reply, and freeing the connection. This can be a lot of work for a step in your graph. It is a solution to consider only when you really cannot find any alternative.
For your case, I would advise the third option. I guess the employee table should not be very big, so using broadcast variables is a good choice.
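As a rough sketch of what that could look like (hedged: EmployeeSource is a hypothetical source that would wrap and periodically re-run the Cassandra read, and the Kafka messages are assumed to carry employee UUIDs as strings; MapStateDescriptor, BroadcastStream and BroadcastProcessFunction come from Flink's streaming API):
MapStateDescriptor<UUID, Employee> employeesDescriptor = new MapStateDescriptor<>(
        "employees", TypeInformation.of(UUID.class), TypeInformation.of(Employee.class));

// broadcast side: every worker receives all employee records and keeps them in state
BroadcastStream<Employee> employeeBroadcast = env
        .addSource(new EmployeeSource()) // hypothetical wrapper around the Cassandra read
        .broadcast(employeesDescriptor);

DataStream<Employee> enriched = env.addSource(kafkaConsumer)
        .connect(employeeBroadcast)
        .process(new BroadcastProcessFunction<String, Employee, Employee>() {
            @Override
            public void processElement(String empId, ReadOnlyContext ctx, Collector<Employee> out) throws Exception {
                // per-Kafka-record lookup against local broadcast state, no Cassandra call here
                Employee emp = ctx.getBroadcastState(employeesDescriptor).get(UUID.fromString(empId));
                if (emp != null) {
                    out.collect(emp);
                }
            }

            @Override
            public void processBroadcastElement(Employee emp, Context ctx, Collector<Employee> out) throws Exception {
                // each incoming employee record updates this worker's copy of the state
                ctx.getBroadcastState(employeesDescriptor).put(emp.getEmpid(), emp);
            }
        });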
I am trying to get a grasp on Google App Engine programming and wonder what the difference between these two methods is - if there even is a practical difference.
Method A)
public Collection<Conference> getConferencesToAttend(Profile profile)
{
List<String> keyStringsToAttend = profile.getConferenceKeysToAttend();
List<Conference> conferences = new ArrayList<Conference>();
for(String conferenceString : keyStringsToAttend)
{
conferences.add(ofy().load().key(Key.create(Conference.class,conferenceString)).now());
}
return conferences;
}
Method B)
public Collection<Conference> getConferencesToAttend(Profile profile) {
List<String> keyStringsToAttend = profile.getConferenceKeysToAttend();
List<Key<Conference>> keysToAttend = new ArrayList<>();
for (String keyString : keyStringsToAttend) {
keysToAttend.add(Key.<Conference>create(keyString));
}
return ofy().load().keys(keysToAttend).values();
}
the "conferenceKeysToAttend" list is guaranteed to only have unique Conferences - does it even matter then which of the two alternatives I choose? And if so, why?
Method A loads entities one by one, while Method B does a bulk load, which is cheaper, since you're making just one network round trip to Google's datacenter. You can observe this by measuring the time taken by both methods while loading a bunch of keys multiple times.
When doing a bulk load, you need to be careful about which entities were actually loaded if the datastore operation throws an exception: the operation might partially succeed, with some of the entities not loaded.
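As a sketch of that caution with Objectify (assuming its documented batch-get behavior, where the returned map simply omits keys that were not found):
Map<Key<Conference>, Conference> loaded = ofy().load().keys(keysToAttend);
if (loaded.size() != keysToAttend.size()) {
    // some conferences were missing or not loaded; decide how to handle the gap here
}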
The answer depends on the size of the list. If we are talking about hundreds of keys or more, you should not make a single batch. I couldn't find documentation on what the limit is, but there is one. If the list is not that large, definitely go with loading one by one. But you should make the calls asynchronous by not calling the now() function right away:
List<LoadResult<Conference>> conferences = new ArrayList<>();
conferences.add(ofy().load().key(Key.create(Conference.class,conferenceString)));
And when you need the actual data:
for (LoadResult<Conference> conferenceResult : conferences) {
    Conference c = conferenceResult.now();
    ......
}
OK, I am building a client app that needs to fetch data from a DB. The app won't fetch all the data from the DB at once, but page by page (pagination).
It has a simple textbox for the user to enter text and a button to search the data.
Requirements:
-If the system has already downloaded the data for a certain pageNo, it won't call the server again.
-Each time it successfully calls the server, it needs to remember the pageNo, so that the next time the user searches for that exact term it will request pageNo = pageNo + 1, since that pageNo has already been fetched.
So here is what I did:
private HashMap<String, Integer> wordPageNoHashMap=new HashMap<String, Integer>();
button.addClickHandler(new ClickHandler(){
@Override
public void onClick(ClickEvent event) {
int pageNo=0;
if(wordPageNoHashMap.containsKey(word)){
pageNo=wordPageNoHashMap.get(word); // note: pageNo only increases if a result was found
}
else{
pageNo=1;
wordPageNoHashMap.put(word, pageNo);
}
callToDB(word,pageNo);
}
});
public void resultFromDB(ServerResult result){
int pageNo=result.getPageNo();
String word=result.getWord();
List<String> textResult=result.getResult();
if(textResult!=null && textResult.size()>0){
pageNo++;
wordPageNoHashMap.put(word, pageNo);
//show data here
}
else{
//show err here
}
}
I am putting pageNo++ in the result handler, not at the time we make the call.
Am I designing it OK, or can you suggest a better design?
Assuming my understanding is correct: for a search query, I would retrieve a reasonable number of records from the DB (say 500), store them in something like a PagedListHolder, and set the per-page size to whatever number you want (say 20).
Now there are two options. When the user clicks next, I simply call nextPage() and retrieve that data set (this might be applicable to infinite loading).
Or, if the user clicks on a particular page number (conventional pagination), I pass the page number to the setPage() method and retrieve the elements of that page.
I have used the PagedListHolder example to make it easy to understand; you may use any similar class if available, or you can write one. A minimal sketch follows.
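A minimal sketch of that flow with PagedListHolder (org.springframework.beans.support); the 500-record result list and the concrete page numbers are just assumptions for illustration:
PagedListHolder<String> holder = new PagedListHolder<>(resultsFromDb); // e.g. up to 500 records fetched from the DB
holder.setPageSize(20); // 20 items per page

List<String> firstPage = holder.getPageList(); // items 1-20
holder.nextPage(); // "infinite loading" style
List<String> secondPage = holder.getPageList(); // items 21-40

holder.setPage(4); // conventional pagination (0-based), i.e. the fifth page
List<String> fifthPage = holder.getPageList();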
I think this achieves your objective of not hitting the DB for the same set of data.
Let me know if it helped.
I have a problem that is related more to logic than to a technology. Here is the scenario (I am using Spring + Hibernate):
I need to read some data from the database to return to the page on every GET request, but it occurred to me: what if someone uses a script to reload the page very frequently? That would cause many calls to the server. To prevent this, I thought of reading the data once and keeping it in global (class-level) variables, but by doing so I ended up writing very weird code: many global variables, and clumsy ways to give them initial values. For example, for a byte variable user-status I used -2 as the initial value so that my inner logic can tell that no value has been set for it from the database. Below is my code:
@Controller
/* @Secured("hasRole('ROLE_USERS')") */
@RequestMapping("member")
public class ApplyRoles {
@Autowired
private UserInformationForAccessApplication checkUserStatus;
// we initialize these variables to sentinel values ourselves so we can tell nothing has been loaded yet
private byte userStatus = Constant.IntializationOfGlobalVariable.GLOBALINIT,
requesttype = Constant.IntializationOfGlobalVariable.GLOBALINIT,
access = Constant.IntializationOfGlobalVariable.GLOBALINIT;
Map<String, Object> accessnrole;
Map<String, String> country;
Map<String, String> roleArray;
@Autowired
StudentEnrollmentApplication enrollmentApplication;
@Autowired
SystemProperties systemProperties;
@Autowired
EmployeeEnrollmentApplicationResume employeeEnrollmentApplicationResume;
@Autowired
AccessEnrollmentProcessing accessEnrollmentProcessing;
private String role = Constant.IntializationOfGlobalVariable.ROLENOTSET,
fname, lname;
@RequestMapping(value = "/user", method = RequestMethod.GET)
public String checkingUserStatus(Model model, HttpSession session,
Authentication authentication) {
String sessionemail = "yashprit@gmail.com";// (String) session
// .getAttribute(Constant.SessionAttributes.LOGGEDINUSER);
// first check the global value; if it is already set, don't fetch from the database
if (userStatus == Constant.IntializationOfGlobalVariable.GLOBALINIT) {
// get user status from MySQL Database
userStatus = checkUserStatus.checkStatus(sessionemail).get(0);
if (!(userStatus == Constant.UserRoleApplicationStatus.NOTAPPLIED)) {
access = checkUserStatus.checkStatus(sessionemail).get(1);
model.addAttribute(Constant.SystemName.ACCESS, access);
}
}
if (!(userStatus >= Constant.UserRoleApplicationStatus.NOTAPPLIED || userStatus <= Constant.UserRoleApplicationStatus.REJECTED)) {
model.addAttribute("error", "User status is not avaible");
return "redirect:error/pagenotfound";
} else if (userStatus == Constant.UserRoleApplicationStatus.NOTAPPLIED) {
if (requesttype == Constant.IntializationOfGlobalVariable.GLOBALINIT) {
// get request type from MongoDB database
requesttype = checkUserStatus.getRequestType(sessionemail);
}
if (!(requesttype == Constant.RequestType.NORMALEBIT || requesttype == Constant.RequestType.INVITEBIT)) {
model.addAttribute("error",
"Facing Technichal Issue, Please try again");
return "redirect:error/pagenotfound";
}
if (requesttype == Constant.RequestType.INVITEBIT) {
if (!(Byte.parseByte((String) accessnrole
.get(Constant.SystemName.ACCESS)) == Constant.Access.USERBIT)) {
accessnrole = checkUserStatus
.getAccessAndRole(sessionemail);
}
if (accessnrole.get(Constant.SystemName.ACCESS).equals(
Constant.Database.ERRORMESSAGE)
|| accessnrole.get(Constant.SystemName.ROLE).equals(
Constant.Database.ERRORMESSAGE)) {
model.addAttribute("error",
"Facing Technichal Issue, Please try again");
return "redirect:error/pagenotfound";
}
model.addAttribute(Constant.SystemName.ACCESSNROLE, accessnrole);
model.addAttribute(Constant.SystemName.REQUESTTYPE, requesttype);
}
}
model.addAttribute(Constant.SystemName.USERSTATUS, userStatus);
return "member/user";
}
}
To avoid global variables I thought of using cookies, because I don't want to call the database on every page reload in the same session; once the data is loaded for a session, I shouldn't have to call the database again.
Anything that can help me redesign the above code is much appreciated.
Thanks
There are really two things that you are considering, and correct me if I'm wrong:
Caching on the server (in your Java application) to avoid doing a database lookup multiple times for the same data.
Preventing the client (browser) from sending multiple requests to the server.
The first can be resolved using caching, which is available in Spring via annotations on any given method. The documentation is available here.
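As a minimal sketch of that first approach (assumptions: caching is enabled with @EnableCaching and a cache manager is configured; UserStatusService and the "userStatus" cache name are made up for illustration):
@Service
public class UserStatusService {

    // runs only on a cache miss; repeated calls with the same email return
    // the cached value without touching the database
    @Cacheable(value = "userStatus", key = "#email")
    public byte loadUserStatus(String email) {
        return fetchStatusFromDatabase(email); // your existing lookup, e.g. checkUserStatus.checkStatus(email).get(0)
    }

    private byte fetchStatusFromDatabase(String email) {
        return 0; // placeholder for the real query
    }
}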
The second is a bit more tricky, and I'd leave it for now unless you discover a performance problem. It's again possible to do in Spring, and it takes advantage of the HTTP protocol's caching controls, available in the HTTP headers, to inform the browser how long to cache responses.
What you are thinking about is called a "cache". It is a standard Computer Science way of doing things and they have been doing research on how to use caches for as long as there have been computers.
You might want to go do some reading on the subject. I found this one by Googling "cache tutorial java" http://javalandscape.blogspot.com/2009/01/cachingcaching-algorithms-and-caching.html
In the simplest terms (a one-item cache), what you want is to store some data object that you recently took some time to come up with. But you also have to have some sort of identifier so you can tell whether the next request is asking for the same data. If it isn't, you have to do all the work over. If it is the same data, you just return it again.
So the algorithm works something like this in this simple case:
if (storedData != null && storedRequestInfo == userRequest.requestInfo) {
return storedData;
}
storedData = youCalculateTheRequestedData();
storedRequestInfo = userRequest.requestInfo;
return storedData;
It's not any real programming language, just something to show you how it works.
The requestInfo is whatever comes in with the request that you use to look up your database stuff. You save it in storedRequestInfo after any calculation.
This shows it as returning some data to the user, that's what is in storedData.
It's a simple, one-element cache.
(To expand on this, you can store the storedRequestInfo and storedData in the session and you end up with one of these stored for each user. You can also use a java Map and store a bunch of storedData. The problem is to decide how to limit your memory use. If you store too many of these for each user, you use up too much memory. So you limit how many each user can have either by size or by count. Then you have to decide which one to delete when it gets too big. In the simple case, you always delete, in essence, the stored one and store a new one.
I noticed your comment. ECache is just a big fancy Map in the terms I used above. I don't know if it's naturally session dependent but it can be made that way by adding the session id to the cache key.)
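To illustrate the size-limited Map from the paragraph above, here is a small sketch of an LRU cache built on java.util.LinkedHashMap's access-order mode (the entry limit is an arbitrary assumption):
// evicts the least recently used entry once maxEntries is exceeded
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        super(16, 0.75f, true); // true = access order, so the eldest entry is the least recently used
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;
    }
}
// usage, e.g. one per session: Map<String, Object> cache = new LruCache<>(100);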
I have a requirement to build an IMAP client as a web application.
I implemented the sorting functionality as follows:
// userFolder is an instance of IMAPFolder
Message[] messages = userFolder.getMessages();
Arrays.sort(messages, new Comparator<Message>()
{
public int compare(Message message1, Message message2)
{
int returnValue = 0;
try
{
if (sortCriteria == SORT_SENT_DATE)
{
returnValue = message1.getSentDate().compareTo(message2.getSentDate());
}
} catch (Exception e)
{
System.out.println(e.getMessage());
e.printStackTrace();
}
if (sortType == SORT_TYPE_DESCENDING)
{
returnValue = -returnValue;
}
return returnValue;
}
});
The code snippet is not complete, it's just brief. SORT_SENT_DATE and SORT_TYPE_DESCENDING are my own constants.
This solution actually works fine, but its logic breaks down for paging.
Being a web-based application, I can't expect the server to load all messages for every user and sort them (we do have situations with more than 1000 simultaneous users, each with a mailbox of more than 1000 messages).
It also does not make sense for the web server to load everything, sort it, and return just a small part (say 1-20), then on the next request load and sort everything again to return 21-40. Caching is possible, but what's the guarantee the user would actually make the next request?
I heard there is a class called FetchProfile; can that help me here? (I guess it would still load all messages, but only the information that's required.)
Is there any other way to achieve this?
I need a solution that would also work for the search operation (searching with paging); I have built an architecture to create a SearchTerm, but there too I would require paging.
For reference, I have asked this same question at:
http://www.coderanch.com/t/461408/Other-JSE-JEE-APIs/java/it-possible-use-IMAP-paging
You would need a server with the SORT extension, and even that may not be enough. You then issue SORT on the specific mailbox and FETCH only those message numbers that fall into your view.
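As a sketch of that approach with newer JavaMail versions (getSortedMessages() exists on com.sun.mail.imap.IMAPFolder since JavaMail 1.5.4 and requires the server to advertise the SORT capability; "store" is assumed to be an already-connected IMAP Store):
IMAPFolder folder = (IMAPFolder) store.getFolder("INBOX");
folder.open(Folder.READ_ONLY);
// server-side sort, newest first (REVERSE modifies the following key)
Message[] sorted = folder.getSortedMessages(new SortTerm[] { SortTerm.REVERSE, SortTerm.DATE });
// page on the client by slicing the already-sorted array, e.g. messages 1-20
Message[] page = Arrays.copyOfRange(sorted, 0, Math.min(20, sorted.length));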
Update based on comments:
For servers where the SORT extension is not available, the next best thing is to FETCH the header field representing the sort key for all items (e.g. FETCH 1:* BODY[HEADER.FIELDS(SUBJECT)] for subject, or FETCH 1:* BODY[HEADER.FIELDS(DATE)] for sent date), then sort based on that key. You will get a list of sorted message numbers this way, which should be equivalent to what the SORT command would return.
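If the sort-key fetch happens through JavaMail, a FetchProfile (the class mentioned in the question) does essentially this: it prefetches just the envelope data (subject, date, from) in one bulk FETCH instead of one round trip per message. A sketch:
Message[] messages = userFolder.getMessages();
FetchProfile fp = new FetchProfile();
fp.add(FetchProfile.Item.ENVELOPE); // the headers needed as sort keys
userFolder.fetch(messages, fp); // bulk prefetch; getSentDate() etc. are then served from the cached envelopes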
If a server-side cache is allowed, then the best way is to keep a cache of envelopes (in the IMAP ENVELOPE sense) and update it using the techniques described in RFC 4549. It's easy to sort and page given such a cache.
There are two IMAP APIs for Java: the official JavaMail API and Risoretto. Risoretto is lower-level and should allow you to implement anything described above; JavaMail may be able to do so as well, but I don't have much experience with it.