I'm trying to figure out how I can store long info about a Discord user in an array.
Tommy on Discord types: ;create
then
Mike on Discord types: ;create
public class Create extends ListenerAdapter {
    @Override
    public void onMessageReceived(MessageReceivedEvent event) {
        if (!event.isFromGuild()) return;
        String[] messageSent = event.getMessage().getContentRaw().split(" "); // for further code
        String name = event.getMember().getUser().getName();
        long idLong = event.getMember().getUser().getIdLong();
        String idString = event.getMember().getUser().getId(); // not relevant to this question
        long base = 1L; // everyone starts with 1 (the L suffix makes the literal a long)
        if (messageSent[0].equalsIgnoreCase(";Create")) {
            ArrayList<Long> dcbase = new ArrayList<>(); // Long is used to store the getIdLong value
            dcbase.add(idLong);
            dcbase.add(base); // 1L is the default value
            event.getChannel().sendMessage("Here is the array " + dcbase).queue();
        }
    }
}
Now the problem is that if I want my ArrayList to hold many users, I would need an ArrayList of ArrayLists: ArrayList<ArrayList<Long>>.
But to search through them I would like to use the idLong value.
I tried to name dcbase after idLong, but that name is already defined.
Is there any way I can do that?
What I want to do next is have a method that goes to Tommy's idLong and pulls out element [1] of Tommy's ArrayList.
I plan to store the info in a file that way, and the arrays will grow longer:
177877878787 1 0 0 //Tommy's idLong, base long, stuff I'll add, stuff I'll add
121244777778 1 //Mike's idLong, base long
//New line for each new member.
Since I don't know on which line the required idLong will be stored in the file, I need a reference to search for it.
I am self-taught; I hope I am clear enough.
You want a Map. The key is the user id (as a long) and the value is your number:
private final Map<Long, Long> map = new HashMap<>();

@Override
public void onMessageReceived(MessageReceivedEvent event) {
    ...
    map.put(id, base);
    ...
}
You can then simply access the value by using map.get(id). Instead of using lists, I would recommend using proper objects with defined fields for whatever data you want to store.
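For illustration, here is a minimal, self-contained sketch of that approach, combining the Map with a small value object; the class and method names (UserData, UserInfo, baseOf) are made up for this example and are not part of JDA:

```java
import java.util.HashMap;
import java.util.Map;

public class UserData {
    // Hypothetical holder for per-user values; add more fields as needed.
    static class UserInfo {
        long base;
        UserInfo(long base) { this.base = base; }
    }

    private final Map<Long, UserInfo> users = new HashMap<>();

    // Register a user the first time ;create is seen, keyed by their idLong.
    public void create(long idLong) {
        users.putIfAbsent(idLong, new UserInfo(1L)); // everyone starts with 1
    }

    // Look up a user's value directly by id - no searching through lists.
    public Long baseOf(long idLong) {
        UserInfo info = users.get(idLong);
        return info == null ? null : info.base;
    }

    public static void main(String[] args) {
        UserData data = new UserData();
        data.create(177877878787L); // Tommy
        data.create(121244777778L); // Mike
        System.out.println(data.baseOf(177877878787L)); // prints 1
    }
}
```

The key point is that the lookup is by id directly, instead of scanning a list of lists.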
For the case of persistence, instead of just writing your data to some file, use a relational database like PostgreSQL or SQLite. This way you can easily update them at runtime without having to read/write the entire map every time you want to access it.
A very helpful thing to use for this is the Java Persistence API (JPA).
@Entity
@Table
public class UserInfo {
    @Id
    private long id;
    private long base;

    protected UserInfo() { } // JPA requires a no-arg constructor

    public UserInfo(long id, long base) {
        this.id = id;
        this.base = base;
    }

    public void setId(long id) {
        this.id = id;
    }

    public long getId() { return this.id; }

    public void setBase(long base) {
        this.base = base;
    }

    public long getBase() { return this.base; }
}
I need help with the getQueryResult() function in Hyperledger Fabric.
I know that I can use it this way:
String queryHash;
QueryResultsIterator<KeyValue> results = stub.getQueryResult("{\"selector\":{\"hash\":\"" + queryHash + "\"}}");
to run a query that searches each asset for those with the hash parameter set to the queryHash string.
At the moment, however, I have 3 different types of assets, and I would like to understand how to restrict the search to only one of them.
Let me explain. Suppose I have 3 different types of assets: an asset called car, with its attributes (id, name, model, etc.), an asset called truck, also with its attributes, and another called aeroplane, also with its attributes.
Let's say I want to make a query that searches for all the cars by make, but without including trucks and planes.
How can I indicate in the query that I am referring only to that type of asset?
Thanks
Why query on a single object?
I don't know your chaincode models, but supposing you have some kind of ID, it would be something like:
String id;
String queryHash;
// ...
QueryResultsIterator<KeyValue> results = stub.getQueryResult("{\"selector\":{\"id\":\"" + id + "\"" + ", " + "\"hash\":\"" + queryHash + "\" }}");
If you have access to its CouchDB key, you can simply get the object and check the hash when deserialized:
String couchdbKey;
// ...
byte[] ba = stub.getState(couchdbKey);
// Deserialize ba and check hash
EDIT
I think you should refactor your models. I usually develop chaincodes in Go, but in Java it could be something like this (double-check the code; it was written on the fly):
public abstract class Asset {
    @Property()
    private String doctype;

    @Property()
    private String id;

    protected Asset(String doctype) {
        this.doctype = doctype;
    }

    public String getDoctype() {
        return doctype;
    }

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    // ...
}

@DataType()
public class Car extends Asset {
    public static final String DOCTYPE = "car";

    public Car() {
        super(Car.DOCTYPE);
    }

    // ...
}
Then you can query on doctype for each model. Preferably, also create an index on doctype.
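For instance, the selector for such a doctype-filtered query could be assembled like this. This follows CouchDB's Mango selector syntax; the helper class and field names here are made up for illustration, and in real chaincode you'd pass the resulting string to stub.getQueryResult(...):

```java
public class SelectorBuilder {
    // Build a CouchDB Mango selector that restricts results to one doctype
    // and additionally filters on a single field.
    public static String byDoctypeAndField(String doctype, String field, String value) {
        return "{\"selector\":{\"doctype\":\"" + doctype + "\",\""
                + field + "\":\"" + value + "\"}}";
    }

    public static void main(String[] args) {
        // Cars only, filtered by make; trucks and planes have other doctypes.
        System.out.println(byDoctypeAndField("car", "make", "Toyota"));
        // prints {"selector":{"doctype":"car","make":"Toyota"}}
    }
}
```

Note that naive string concatenation like this is fine for fixed values but should not be used with untrusted input.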
Q: I have a Bank class containing multiple loan accounts (LoanAccount class). I've created a LoanAccountService that has the CRUD functionality. My concern is about how I implemented the update functionality.
Bank
public class Bank {
    private List<LoanAccount> loanAccounts;
}
Loan account
public class LoanAccount {
    private String id;
    private Integer numberOfInstallments;
    private LoanAccountType type;
    private Date creationDate;
    private BigDecimal loanAmount;
}
Service
public class LoanAccountService {
    private Bank bank;

    public LoanAccountService(Bank bank) {
        this.bank = bank;
    }

    public LoanAccount update(LoanAccount loanAccount) {
        Optional<LoanAccount> account = bank.getLoanAccounts()
                .stream()
                .filter(la -> la.getId().equals(loanAccount.getId()))
                .findAny();
        if (account.isPresent()) {
            account.get().setCreationDate(loanAccount.getCreationDate());
            account.get().setLoanAmount(loanAccount.getLoanAmount());
            account.get().setNumberOfInstallments(loanAccount.getNumberOfInstallments());
            account.get().setType(loanAccount.getType());
        } else {
            throw new IllegalArgumentException("The object does not exist.");
        }
        return loanAccount;
    }
}
When the method update is called with a LoanAccount containing an id that already exists in loanAccounts list, I want to update the existing object with the object loanAccount given as parameter.
Above is my implementation, but I feel like there should be better ways to do it.
Use builder-style setters that return this:
public class LoanAccount {
    private String id;
    private Integer numberOfInstallments;
    // add other properties

    public String getId() {
        return id;
    }

    public LoanAccount setId(String id) {
        this.id = id;
        return this;
    }

    public Integer getNumberOfInstallments() {
        return numberOfInstallments;
    }

    public LoanAccount setNumberOfInstallments(Integer numberOfInstallments) {
        this.numberOfInstallments = numberOfInstallments;
        return this;
    }
}
Then the update method becomes:
public LoanAccount update(LoanAccount loanAccount) {
    return bank.getLoanAccounts()
            .stream()
            .filter(la -> la.getId().equals(loanAccount.getId()))
            .findFirst().orElseThrow(IllegalArgumentException::new)
            .setCreationDate(loanAccount.getCreationDate())
            .setLoanAmount(loanAccount.getLoanAmount())
            .setNumberOfInstallments(loanAccount.getNumberOfInstallments())
            .setType(loanAccount.getType());
}
You could use a HashMap whose key type is the type of your LoanAccount.id.
Then call loanAccounts.put(id, object).
This will update the object if the id is already present and add a new object if not.
This is a cheap, dirty way. Another way of doing it would be to make your LoanAccount class implement Comparable and, in the compareTo() method, do an id-based comparison.
Do the same thing overriding your equals() and you should be ready to go.
@Override
public boolean equals(Object obj) {
    if (!(obj instanceof LoanAccount)) return false;
    return ((LoanAccount) obj).getId().equals(this.getId());
}
Something like that.
(Code written from memory; it may have errors and it lacks validations.)
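The HashMap approach above can be sketched like this, with LoanAccount trimmed to the two fields needed for the demonstration:

```java
import java.math.BigDecimal;
import java.util.HashMap;
import java.util.Map;

public class AccountStore {
    // Minimal stand-in for LoanAccount, reduced for brevity.
    static class LoanAccount {
        final String id;
        final BigDecimal loanAmount;

        LoanAccount(String id, BigDecimal loanAmount) {
            this.id = id;
            this.loanAmount = loanAmount;
        }
    }

    public static void main(String[] args) {
        Map<String, LoanAccount> loanAccounts = new HashMap<>();

        // The first put inserts; a second put with the same id replaces the value.
        loanAccounts.put("A1", new LoanAccount("A1", new BigDecimal("1000")));
        loanAccounts.put("A1", new LoanAccount("A1", new BigDecimal("2500")));

        System.out.println(loanAccounts.size());               // prints 1
        System.out.println(loanAccounts.get("A1").loanAmount); // prints 2500
    }
}
```

With this structure the service's update is a single put/get by id, with no stream filtering.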
What kind of persistence layer do you use?
Why do you need to loop through all of the bank's accounts?
Did you fetch all the accounts from the repository and loop over them in the service layer? If so, why?
Why not fetch the single matching record from the repository and update it?
Why not use a find-and-update operation instead?
These questions may give you an idea. If not, let's discuss further.
I have a current state where an enum MyType represents a Type table with columns:
ID
Name
It is used to identify a type by its ID with the byId method:
public enum MyType {
    FIRST_TYPE("First Type", 10),
    SECOND_TYPE("Second Type", 20);

    public static class Holder {
        static Map<Integer, MyType> idMap = new HashMap<>();
        private Holder() { }
    }

    private final String name;
    private final Integer id;

    private MyType(String name, Integer id) {
        this.name = name;
        this.id = id;
        Holder.idMap.put(id, this);
    }

    public String getName() {
        return name;
    }

    public static MyType byId(Integer id) {
        return Holder.idMap.get(id);
    }
}
My new requirement is to also support values that exist in the Type table. I found answers about dynamic enums, but the accepted answer is not to do it:
No. Enums are always fixed at compile-time. The only way you could do this would be to dynamically generate the relevant bytecode.
What would be a better solution for also finding values (mainly IDs) that come from the database (for example ID 30)?
select ID from TYPE
Can I extend the existing state instead of changing it? Can I add extra IDs from the database using a method?
EDIT
Even if I update, as @StefanFischer suggested, to an interface that populates the map from both the enum class and a new database class, I still expect the byId method to return an enum in code:
public interface MyType {
    public static class Holder {
        static Map<Integer, MyType> idMap = new HashMap<>();
        private Holder() { }
    }

    public default void add(MyType myType, Integer id) {
        Holder.idMap.put(id, myType);
    }

    public static MyType byId(Integer id) {
        return Holder.idMap.get(id);
    }
}
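To make the interface idea concrete, here is a self-contained sketch in which both an enum (for the predefined values) and a plain class (for database rows) implement the interface, so byId can return either. The names BuiltinType and DbType are made up for this example, and note the usual caveat that the enum's constants only land in the map once the enum class has been initialized:

```java
import java.util.HashMap;
import java.util.Map;

interface MyType {
    Map<Integer, MyType> ID_MAP = new HashMap<>();

    String getName();

    static void register(Integer id, MyType type) {
        ID_MAP.put(id, type);
    }

    static MyType byId(Integer id) {
        return ID_MAP.get(id);
    }
}

// Predefined values stay as an enum...
enum BuiltinType implements MyType {
    FIRST_TYPE("First Type", 10),
    SECOND_TYPE("Second Type", 20);

    private final String name;

    BuiltinType(String name, Integer id) {
        this.name = name;
        MyType.register(id, this);
    }

    public String getName() { return name; }
}

// ...while rows from the Type table become plain instances.
class DbType implements MyType {
    private final String name;
    DbType(String name) { this.name = name; }
    public String getName() { return name; }
}

public class TypeDemo {
    public static void main(String[] args) {
        BuiltinType.values(); // force enum initialization so the constructors register
        MyType.register(30, new DbType("Third Type")); // e.g. loaded from the database
        System.out.println(MyType.byId(10).getName()); // prints First Type
        System.out.println(MyType.byId(30).getName()); // prints Third Type
    }
}
```

Code that only needs a MyType works uniformly; code that needs enum-specific features (switch, EnumSet) still cannot get them for database-born values, which is the fundamental limitation discussed below.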
A distinct non-answer: you are trying to force yourself down the wrong rabbit hole.
The whole point of enums is to give you certain advantages at compile time. At runtime, it really wouldn't matter to the JVM whether you have a class with some static final Whatever fields or an enum with different constants, or whether you use an EnumSet versus an ordinary Set.
You use enums because they allow you to write down your source code in more elegant ways.
Therefore the approach of generating enums at runtime doesn't make sense.
The idea of enums is that you write source code using them. But when your enums are generated for you, how exactly would you write source code exploiting them?! As mentioned already, enum classes are final by default. You can't extend or enhance them separately. Whatever you would want to have, it needs to be generated for you. Which again raises the question: how would you exploit something at compile time, that gets generated at runtime?
Therefore, from a conceptual point of view, the approach outlined in the other answer (to use a Map) is a much better design point than trying to turn enums into something that they aren't meant to be.
If I understand it correctly the requirements are:
having a MyType.byId(Integer id) method that delivers some predefined values
it should be also extended dynamically from a Table Type from the database
So an enum cannot be extended dynamically, but we can switch to a class.
Staying close to your code, one could write something like:
import java.util.HashMap;
import java.util.Map;

public class MyType {
    static Map<Integer, MyType> idMap = new HashMap<>();

    static {
        idMap.put(10, new MyType("First Type"));
        idMap.put(20, new MyType("Second Type"));
    }

    private final String name;

    private MyType(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public static MyType byId(Integer id) {
        return idMap.get(id);
    }

    public static void addType(String name, Integer id) {
        MyType lookup = byId(id);
        if (lookup != null) {
            if (!lookup.getName().equals(name)) {
                System.out.println("conflicting redefinition for id " + id
                        + ": '" + name + "' vs '" + lookup.name + "'");
                // handle...
            }
        }
        idMap.put(id, new MyType(name));
    }
}
Test Data
Let's assume we have the following in the database:
stephan=# select * from Type;
id | name
----+-------------
30 | Third Type
10 | First Type
20 | Second Type
(3 rows)
So in the database we have the predefined types with id=10 and id=20 but also a type with id=30 that is not known per default to the application. But we can populate the types from the database.
Test Case
public static void main(String[] args) {
    try {
        Connection connection = createConnection();
        try (connection) {
            populateTypes(connection);
        }

        MyType type;
        type = MyType.byId(10);
        System.out.println(type.getName());
        type = MyType.byId(20);
        System.out.println(type.getName());
        type = MyType.byId(30);
        System.out.println(type.getName());
    } catch (Exception e) {
        e.printStackTrace();
    }
}
JDBC Example
It doesn't matter what actual database technology is used to retrieve the values. Here an example for JDBC:
private static void populateTypes(Connection connection)
        throws SQLException {
    String sql = "SELECT * FROM type";
    try (Statement st = connection.createStatement()) {
        try (ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                int id = rs.getInt("id");
                String name = rs.getString("name");
                MyType.addType(name, id);
            }
        }
    }
}
Demo Output
First Type
Second Type
Third Type
Is that what you are looking for?
An enum represents a group of constants (unchangeable variables, like final variables). You cannot define one at runtime.
I'm implementing an "auto-increment" id using the strategy described here:
http://docs.mongodb.org/manual/tutorial/create-an-auto-incrementing-field/
Basically, the value of the seqId field is set by calling a utility function that updates the counter in an auxiliary collection and returns the incremented value. Sounds great.
My issue is in mapping this to be used with Morphia. The tutorial suggests performing the insert (such as in the shell) like so:
db.users.insert(
    {
        seqId: getNextSequence("userid"),
        name: "Sarah C."
    }
)
I'm basically looking to do something like setting the POJO seqId field to something that Morphia will translate into an insert like the one above when I invoke save().
My POJO looks like this:
@Entity
public class User {
    @Id
    private Long id;

    // THIS IS THE FIELD I WANT TO AUTO-INCREMENT
    private Long seqId;

    private String name;
    ...
}
The question is: How to make Morphia set the value of a field as the value returned by a function call?
I looked into using the @PrePersist annotation to perform this function call, get the value, and then set it in the _id field. That has several drawbacks, such as making multiple calls to MongoDB instead of just one, and the fact that my model objects don't have a reference to the datastore; I'd rather not mix up those concerns.
Is this possible? Any suggestions?
I'm on MongoDB 2.6.6 using the latest Java drivers.
Thanks!
PS: I'm aware that auto-increment is not recommended in large environments. I need it anyways for this specific scenario.
I'll describe the solution that's working quite well for us. Note that this supports auto increments at the class level and for a subset of it, so you can count users or admin-users (users with an admin enum or whatever).
The following entity contains the current value for each auto increment field; it's basically a reference:
@Entity(noClassnameStored = true)
public class AutoIncrementEntity {
    @Id
    protected String key;

    protected Long value = 1L;

    protected AutoIncrementEntity() {
        super();
    }

    /**
     * Set the key name: the class, or the class plus some other attribute(s).
     */
    public AutoIncrementEntity(final String key) {
        this.key = key;
    }

    /**
     * Set the key name and initialize the value so it won't start at 1.
     */
    public AutoIncrementEntity(final String key, final Long startValue) {
        this(key);
        value = startValue;
    }

    public Long getValue() {
        return value;
    }
}
In your persistence service, you could use the following to set / create the auto increment automatically:
public <E extends BaseEntity> ObjectId persist(E entity) {
    // If it's a user and doesn't yet have an ID, set one; start counting from 1000.
    if ((entity instanceof UserEntity) && (((UserEntity) entity).getUserId() == null)) {
        ((UserEntity) entity).setUserId(
                generateAutoIncrement(entity.getClass().getName(), 1000L));
    }

    // Additionally, set an ID within each user group; start counting from 1.
    if ((entity instanceof UserEntity) && (((UserEntity) entity).getRoleId() == null)) {
        ((UserEntity) entity).setRoleId(
                generateAutoIncrement(entity.getClass().getName() + "-"
                        + ((UserEntity) entity).getRole(), 1L));
    }

    mongoDataStore.save(entity);
    return entity.getId();
}
/**
 * Return a unique numeric value for the given key.
 * The minimum value is used when the counter doesn't exist yet; set it to 1
 * if nothing specific is required.
 */
protected long generateAutoIncrement(final String key, final long minimumValue) {
    // Get the given key from the auto increment entity and try to increment it.
    final Query<AutoIncrementEntity> query = mongoDataStore.find(
            AutoIncrementEntity.class).field("_id").equal(key);
    final UpdateOperations<AutoIncrementEntity> update = mongoDataStore
            .createUpdateOperations(AutoIncrementEntity.class).inc("value");
    AutoIncrementEntity autoIncrement = mongoDataStore.findAndModify(query, update);

    // If none is found, we need to create one for the given key.
    if (autoIncrement == null) {
        autoIncrement = new AutoIncrementEntity(key, minimumValue);
        mongoDataStore.save(autoIncrement);
    }
    return autoIncrement.getValue();
}
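To illustrate the counter semantics of generateAutoIncrement without a database, here is an in-memory equivalent: the Morphia query/findAndModify pair is replaced by a plain map, so this is only a sketch of the logic, not a substitute for the atomic database operation:

```java
import java.util.HashMap;
import java.util.Map;

public class CounterSketch {
    private final Map<String, Long> counters = new HashMap<>();

    // Mirrors generateAutoIncrement: increment if the key exists,
    // otherwise seed the counter with the minimum value.
    public long generateAutoIncrement(String key, long minimumValue) {
        Long current = counters.get(key);
        long next = (current == null) ? minimumValue : current + 1;
        counters.put(key, next);
        return next;
    }

    public static void main(String[] args) {
        CounterSketch sketch = new CounterSketch();
        System.out.println(sketch.generateAutoIncrement("UserEntity", 1000L));       // prints 1000
        System.out.println(sketch.generateAutoIncrement("UserEntity", 1000L));       // prints 1001
        System.out.println(sketch.generateAutoIncrement("UserEntity-ADMIN", 1L));    // prints 1
    }
}
```

In the real implementation the findAndModify call is what makes the increment atomic across concurrent writers; the map version above would need synchronization for that.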
And finally your entity:
@Entity(value = "user", noClassnameStored = true)
public class UserEntity extends BaseEntity {
    public static enum Role {
        ADMIN, USER,
    }

    private Role role;

    @Indexed(unique = true)
    private Long userId;

    private Long roleId;

    // Role setter and getter

    public Long getUserId() {
        return userId;
    }

    public void setUserId(Long userId) {
        this.userId = userId;
    }

    public Long getRoleId() {
        return roleId;
    }

    public void setRoleId(Long roleId) {
        this.roleId = roleId;
    }
}
There's nothing specific going on in the entity; all the logic is handled by the persistence service. I'm not using @PrePersist, because you'd then need to hand the persistence service to the entity, which doesn't sound like a good idea.
I'm evaluating Spring Data's support for Couchbase and have run across the following issue. Consider the following pseudo-code example, where I have two POJO classes and a repository defined and instantiated for each:
public class Foo {
    @Id
    private String _id;

    @Version
    private Long _rev;

    // .. Other data and members.
}

public class Bar {
    @Id
    private String _id;

    @Version
    private Long _rev;

    // .. Other data and members.
}

//
// Repositories
//

@Repository
public interface FooRepository extends CrudRepository<Foo, String> {
}

@Repository
public interface BarRepository extends CrudRepository<Bar, String> {
}
Both repositories are utilizing the same Couchbase bucket. Next:
// Create a new Foo object and save it.
Foo f = new Foo( "id_1" );
fooRepository.save( f );
// Now, try fetching a Bar object using the ID of a Foo object (?)
Bar b = barRepository.findOne( "id_1" );
This results in a Bar object being returned, but not properly initialized, and no exceptions are raised. The question is: why isn't an error indicated in this scenario? It seems like not much of a stretch to raise an exception when the requested type doesn't match the persisted type. Am I missing something?
FWIW, when I look at the raw documents in Couchbase via the admin console, I observe that each contains a "_class" property, which presumably could be used to identify the class used to represent the data and to detect such mismatches.
The problem here is that a document (a JSON-serialized entity) is stored under its id (@Id field) in the same bucket as other entities. This creates a kind of ambiguity: saving Entity1 with id 1 will overwrite Entity2 with id 1.
An application-side solution would be to store entities with distinct keys; in the case of Entity1 with id 1, something like 1-Entity1 as the key.
Maybe you could solve the problem with an approach similar to this one:
public class Entity1 {
    private static final String ENTITY_NAME = "ENTITY1";

    @Id
    private String key;

    public void setKey(String key) {
        this.key = key;
    }

    public String getKey() {
        return this.key;
    }

    public void setId(String id) {
        this.key = ENTITY_NAME + ":" + id;
    }

    public String getId() {
        if (null == this.key)
            return null;
        return this.key.substring(ENTITY_NAME.length() + 1);
    }
}