I created a Google App Engine backend using Eclipse and the Android demo Google hands out. I created the backend and a few models. When I add entities from Android to my database on GAE, they are ordered by date, not by newest created first. The key is just the current date and time on Android. I'm not sure how to work with the backend, since Google generated it for me in my project. Is there a quick change I can make so that when I add an item, instead of ordering by date, it just keeps the newest listings on top?
Edited question: this is the endpoint class Google generated for me. How can I modify it to return the newest entities first?
@Api(name = "quotesendpoint", namespace = @ApiNamespace(ownerDomain = "projectquotes.com", ownerName = "projectquotes.com", packagePath = ""))
public class quotesEndpoint {

    /**
     * This method lists all the entities inserted in datastore.
     * It uses HTTP GET method and paging support.
     *
     * @return A CollectionResponse class containing the list of all entities
     * persisted and a cursor to the next page.
     */
    @SuppressWarnings({ "unchecked", "unused" })
    @ApiMethod(name = "listquotes")
    public CollectionResponse<quotes> listquotes(
            @Nullable @Named("cursor") String cursorString,
            @Nullable @Named("limit") Integer limit) {
        EntityManager mgr = null;
        Cursor cursor = null;
        List<quotes> execute = null;
        try {
            mgr = getEntityManager();
            Query query = mgr.createQuery("select from quotes as quotes");
            if (cursorString != null && !cursorString.isEmpty()) {
                cursor = Cursor.fromWebSafeString(cursorString);
                query.setHint(JPACursorHelper.CURSOR_HINT, cursor);
            }
            if (limit != null) {
                query.setFirstResult(0);
                query.setMaxResults(limit);
            }
            execute = (List<quotes>) query.getResultList();
            cursor = JPACursorHelper.getCursor(execute);
            if (cursor != null)
                cursorString = cursor.toWebSafeString();
            // Tight loop to fetch all entities from the datastore and
            // accommodate lazy fetching.
            for (quotes obj : execute);
        } finally {
            mgr.close();
        }
        return CollectionResponse.<quotes> builder().setItems(execute)
                .setNextPageToken(cursorString).build();
    }
}
The order you see in the datastore viewer in GAE is not significant; it is just a display of the current data in your datastore, shown in increasing order of entity id (if using auto ids). This could coincidentally also be increasing order of date. You cannot modify this display pattern.
What matters is the order seen by your queries, and that is determined by indexes. So if you need your entities in descending order of date, and your date property is left indexed, GAE will automatically maintain an index for it. You just need to query your entities with a descending sort order on the date property.
EDIT:
Based on the code added, the modifications below should be made to query the entities in descending order of date.
1. Add a new date property in your entity:
private Date entrydate;
2. While creating an entity, set this property to the current date:
yourentity.setEntryDate(new Date());
3. While querying, specify a descending sort order on the date property. Note that the generated endpoint uses JPA, whose Query API has no setOrdering method (that belongs to JDO), so the ordering goes into the query string itself, as sketched below.
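A minimal sketch of that change, assuming the property is named entrydate as in step 1 and that your JPA provider accepts an order by clause on the generated query string:

// Inside listquotes(), replace the generated query so that the
// newest entities are returned first:
Query query = mgr.createQuery(
        "select from quotes as quotes order by quotes.entrydate desc");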
I'm using Spring's PagingAndSortingRepository to paginate database entries.
During processing I need to delete some entries.
When I call the repository to delete, the entry is deleted, but after that the problem is with the next Pageable: I don't get the expected number of elements from the next Pageable (pageRequest.next()).
Is there any way to iterate with pagination and perform CRUD operations in parallel?
Part of the code:
while (!onePage.isEmpty()) {
    while (pageIterator.hasNext()) {
        // 'var' keeps the entity type from the page, so getId() resolves
        // (a plain Object here would not compile)
        var nextElement = pageIterator.next();
        if (!falseCondition) {
            log.info("sending message with Id {}", nextElement.getId());
            repository.deleteById(nextElement.getId());
        } else {
            log.info("Lost connection");
            return;
        }
    }
    pageRequest = pageRequest.next();
    onePage = repository.findAll(pageRequest);
    pageIterator = onePage.iterator();
}
Many thanks.
As @ruba pointed out, it is not a Hibernate issue. Even if you used the JDBC API directly, you would have to handle this situation. I can propose a solution:
You can implement a custom spring-data-jpa repository method where the service passes the PageRequest, but you translate it to an offset and a limit. So instead of calling pageRequest.next() you do the following, which takes into account the items deleted in the current page.
int nextPageNumber = pageRequest.getPageNumber() + 1;
int nextOffset = nextPageNumber * pageRequest.getPageSize()
        - itemsDeletedInCurrentPage;
int limit = pageRequest.getPageSize();
List<Item> itemsInNextPage = em.createQuery(query)
        .setFirstResult(nextOffset)
        .setMaxResults(limit)
        .getResultList();
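For completeness, a minimal sketch of how such a method could be wired in as a custom Spring Data repository fragment. The entity and repository names here (Item, ItemRepositoryCustom, ItemRepositoryImpl) are hypothetical, and the JPQL is a placeholder for your actual query:

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.data.domain.Pageable;

public interface ItemRepositoryCustom {
    List<Item> findNextPageAfterDeletes(Pageable pageRequest, int itemsDeletedInCurrentPage);
}

// Spring Data picks this class up by the "Impl" naming convention when
// the main ItemRepository interface extends ItemRepositoryCustom.
public class ItemRepositoryImpl implements ItemRepositoryCustom {

    @PersistenceContext
    private EntityManager em;

    @Override
    public List<Item> findNextPageAfterDeletes(Pageable pageRequest, int itemsDeletedInCurrentPage) {
        int nextPageNumber = pageRequest.getPageNumber() + 1;
        // Shift the offset back by the rows deleted on the current page
        // so the next page does not skip entries.
        int nextOffset = nextPageNumber * pageRequest.getPageSize() - itemsDeletedInCurrentPage;
        return em.createQuery("select i from Item i order by i.id", Item.class)
                .setFirstResult(nextOffset)
                .setMaxResults(pageRequest.getPageSize())
                .getResultList();
    }
}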
I'm trying to update multiple records via an ATG class extending GenericService.
However I'm running into a roadblock.
How do I do a multiple insert, where I can keep adding all the items/rows to the cached object and then do a single sync with the table using item.add()?
Sample code:
The first part clears out the rows in the table before insertion happens (it would be mighty helpful if anyone knows of a way to clear all rows in a table without having to loop through and delete them one by one).
MutableRepository repo = (MutableRepository) feedRepository;
RepositoryView view = null;
try {
    view = getFeedRepository().getView(getFeedRepositoryFeedDataDescriptorName());
    RepositoryItem[] items = null;
    if (view != null) {
        QueryBuilder qb = view.getQueryBuilder();
        Query getFeedsQuery = qb.createUnconstrainedQuery();
        items = view.executeQuery(getFeedsQuery);
    }
    if (items != null && items.length > 0) {
        // remove all items in the repository
        for (RepositoryItem item : items) {
            repo.removeItem(item.getRepositoryId(), getFeedRepositoryFeedDataDescriptorName());
        }
    }
    for (RSSFeedObject rfo : feedEntries) {
        MutableRepositoryItem feedItem = repo.createItem(getFeedRepositoryFeedDataDescriptorName());
        feedItem.setPropertyValue(DB_COL_AUTHOR, rfo.getAuthor());
        feedItem.setPropertyValue(DB_COL_FEEDURL, rfo.getFeedUrl());
        feedItem.setPropertyValue(DB_COL_TITLE, rfo.getTitle());
        // note: this reuses DB_COL_FEEDURL; presumably a published-date
        // column constant was intended here
        feedItem.setPropertyValue(DB_COL_FEEDURL, rfo.getPublishedDate());
        RepositoryItem item = repo.addItem(feedItem);
    }
} catch (RepositoryException e) {
    // (excerpt truncated; exception handling omitted)
}
The way I interpret your question is that you want to add multiple repository items to your repository but you want to do it fairly efficiently at a database level. I suggest you make use of the Java Transaction API as recommended in the ATG documentation, like so:
TransactionManager tm = ...
TransactionDemarcation td = new TransactionDemarcation();
try {
    try {
        td.begin(tm);
        ... do repository item work ...
    } finally {
        td.end();
    }
} catch (TransactionDemarcationException exc) {
    ... handle the exception ...
}
Assuming you are using a SQL repository in your example, the SQL INSERT statements will be issued after each call to addItem but will not be committed until/if the transaction completes successfully.
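In case it helps, the TransactionManager tm = ... line above has to resolve to the JTA transaction manager. In a standard ATG installation this is exposed as the /atg/dynamo/transaction/TransactionManager Nucleus component, and since your class extends GenericService you can resolve it directly. A sketch, assuming a default Nucleus configuration:

import javax.transaction.TransactionManager;

// Inside your GenericService subclass; the component path assumes
// a default ATG/Nucleus setup.
TransactionManager tm = (TransactionManager)
        resolveName("/atg/dynamo/transaction/TransactionManager");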
ATG does not provide support for deleting multiple records in a single SQL statement. You can use transactions, as @chrisjleu suggests, but there is no way to do the equivalent of a DELETE FROM table WHERE id IN ('1', '2', ...). Your code looks correct.
It is possible to invoke stored procedures or execute custom SQL through an ATG Repository, but that isn't generally recommended for portability/maintenance reasons. If you did that, you would also need to flush the appropriate portions of the item/query caches manually.
I have this method in my RPC service:
@Override
public Entrata[] getEntrate(int from, int to) {
    List<Entrata> data = entrateDao.list();
    return data.toArray(new Entrata[0]);
}
As you can see, I am not using the two parameters, which, in a SQL world, I would use as LIMIT and OFFSET.
It's not completely clear to me what I have to do now. I started reading this:
http://code.google.com/p/objectify-appengine/wiki/IntroductionToObjectify#Cursors
I think I have to call query.startCursor(<my_"from"_parameter>) and then iterate "to" times, the page size.
All right? Can you help me with some snippets? :)
From the docs: Cursors let you take a "checkpoint" in a query result set, store the checkpoint elsewhere, and then resume from where you left off later.
As you need just limit/offset, you have to use the limit() and offset() methods of the Objectify Query. Like:
ob.query(Entrata.class).limit(to - from).offset(from)
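Applied to the RPC method from the question, a minimal sketch (assuming ob is your Objectify instance and from/to map to OFFSET/LIMIT as described):

@Override
public Entrata[] getEntrate(int from, int to) {
    // Fetch only the requested window instead of the whole table.
    List<Entrata> data = ob.query(Entrata.class)
            .offset(from)
            .limit(to - from)
            .list();
    return data.toArray(new Entrata[0]);
}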
Or, when you have a cursor:
String cursor = // get it from request
Query<Entrata> query = ob.query(Entrata.class);
query = query.startCursor(Cursor.fromWebSafeString(cursor));
query.limit(x);
QueryResultIterator<Entrata> iterator = query.iterator();
List<Entrata> data = // fetch data
String newCursor = iterator.getCursor().toWebSafeString();
return new EntrataListWithCursor(data, newCursor);
I just want to make sure you don't have any errors in your code, since you may copy and paste Igor Artamonov's code.
Here is cleaner code from the Objectify wiki, with fewer errors and some documentation:
// Create the query and set the limit to 1000
Query<Car> query = ofy().load().type(Car.class).limit(1000);

// Get the cursor (if it exists) from the request.
// For the first request, i.e. the first page, this parameter will be null.
String cursorStr = request.getParameter("cursor");

// If the cursor is not null and not empty,
// start the query from the last checkpoint.
if (cursorStr != null && !cursorStr.isEmpty())
    query = query.startAt(Cursor.fromWebSafeString(cursorStr));

// This variable tells us whether there are entries remaining to load.
boolean remaining = false;
QueryResultIterator<Car> iterator = query.iterator();
while (iterator.hasNext()) {
    Car car = iterator.next();
    ... // your code here
    // We found entries, so we set this variable to true.
    // That means we probably have another page to fetch.
    remaining = true;
}

// If we found entries, pass along the last checkpoint.
if (remaining) {
    // Take the checkpoint from the iterator's cursor via toWebSafeString().
    Cursor cursor = iterator.getCursor();
    Queue queue = QueueFactory.getDefaultQueue();
    queue.add(url("/pathToThisServlet").param("cursor", cursor.toWebSafeString()));
}
I have an application that uses hibernate. At one part I am trying to retrieve documents. Each document has an account number. The model looks something like this:
private Long _id;
private String _acct;
private String _message;
private String _document;
private String _doctype;
private Date _review_date;
I then retrieve the documents with a document service. A portion of the code is here:
public List<Doc_table> getDocuments(int hours_, int dummyFlag_, List<String> accts) {
    List<Doc_table> documents = new ArrayList<Doc_table>();
    Session session = null;
    Criteria criteria = null;
    try {
        // Create a cutoff date by subtracting the number of hours_ passed.
        session = HibernateUtil.getSession();
        session.beginTransaction();
        if (accts == null) {
            Calendar cutoffTime = Calendar.getInstance();
            cutoffTime.add(Calendar.HOUR_OF_DAY, hours_);
            criteria = session.createCriteria(Doc_table.class)
                    .add(Restrictions.gt("dbcreate_date", cutoffTime.getTime()))
                    .add(Restrictions.eq("dummyflag", dummyFlag_));
        } else {
            criteria = session.createCriteria(Doc_table.class)
                    .add(Restrictions.in("acct", accts));
        }
        documents = criteria.list();
        for (int x = 0; x < documents.size(); x++) {
            Doc_table document = documents.get(x);
            ......... more stuff here
        }
This works great if I'm retrieving a small number of documents. But when the number of documents is large, I get a heap space error, probably because the documents take up a lot of space, and when you retrieve several thousand of them, bad things happen.
All I really want to do is retrieve each document that fits my criteria, grab the account number and return a list of account numbers (a far smaller object than a list of objects). If this were jdbc, I would know exactly what to do.
But in this case I'm stumped. I guess I'm looking for a way to bring back just the account numbers from the Doc_table objects.
Or alternatively, some way to retrieve from the database, one at a time, the documents that fit my criteria (instead of bringing back the whole List of objects, which uses too much memory).
There are several ways to deal with the problem:
Load the docs in batches of a smaller size.
(The way you noticed) Query not for the documents but only for the account numbers:
List<String> accts = session
        .createQuery("select d._acct from Doc d where ...")
        .list();
or
List<String> accts = session.createCriteria(Doc.class)
        .setProjection(Projections.property("_acct"))
        .list();
If there is a special field in your Document class that contains the huge document byte data, you could map that field as lazily loaded.
Create a second entity class (read only) that contains only the fields you need and map it to the same table; see the sketch below.
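A minimal sketch of that last option, assuming JPA annotation mappings, a table named DOC_TABLE, and illustrative column names (DocAccountView is a hypothetical class):

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;
import org.hibernate.annotations.Immutable;

// Lightweight, read-only view of the same table: only the id and the
// account number are mapped, so the large document body is never loaded.
@Entity
@Immutable
@Table(name = "DOC_TABLE")
public class DocAccountView {

    @Id
    @Column(name = "ID")
    private Long id;

    @Column(name = "ACCT")
    private String acct;

    public Long getId() { return id; }
    public String getAcct() { return acct; }
}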
Instead of fetching all documents, i.e. all records at once, try to limit the rows being fetched. Also, consider a strategy where you store documents temporarily as flat files and fetch them later, or delete them after usage. Though it is a longer process, it is an efficient way of handling and delivering documents from a database.
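If you want to process documents one at a time without holding the whole list in memory, Hibernate's ScrollableResults API is another option. A sketch against the criteria from your method; it assumes Doc_table has a getAcct() accessor and that evicting each entity fits your session lifecycle:

import org.hibernate.ScrollMode;
import org.hibernate.ScrollableResults;

// Stream rows one at a time instead of materializing the whole list.
ScrollableResults results = criteria.scroll(ScrollMode.FORWARD_ONLY);
List<String> accountNumbers = new ArrayList<String>();
while (results.next()) {
    Doc_table document = (Doc_table) results.get(0);
    accountNumbers.add(document.getAcct());
    // Evict the entity so the session cache does not grow unbounded.
    session.evict(document);
}
results.close();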
I'm having what seems to be a transactional issue in my application. I'm using Java 1.6 and Hibernate 3.2.5.
My application runs a monthly process where it creates billing entries for every user in the database based on their monthly activity. These billing entries are then used to create a Monthly Bill object. The process is:
Get users who have activity in the past month
Create the relevant billing entries for each user
Get the set of billing entries that we've just created
Create a Monthly Bill based on these entries
Everything works fine until Step 3 above. The Billing Entries are correctly created (I can see them in the database if I add a breakpoint after the Billing Entry creation method), but they are not pulled out of the database. As a result, an incorrect Monthly Bill is generated.
If I run the code again (without clearing out the database), new Billing Entries are created and Step 3 pulls out the entries created in the first run (but not the second run). This, to me, is very confusing.
My code looks like the following:
for (User user : usersWithActivities) {
    createBillingEntriesForUser(user.getId());
    userBillingEntries = getLastMonthsBillingEntriesForUser(user.getId());
    createXMLBillForUser(user.getId(), userBillingEntries);
}
The methods called look like the following:
@Transactional
public void createBillingEntriesForUser(Long id) {
    UserManager userManager = ManagerFactory.getUserManager();
    User user = userManager.getUser(id);
    List<AccountEvent> events = getLastMonthsAccountEventsForUser(id);
    BillingEntry entry = new BillingEntry();
    if (null != events) {
        for (AccountEvent event : events) {
            if (event.getEventType().equals(EventType.ENABLE)) {
                Calendar cal = Calendar.getInstance();
                Date eventDate = event.getTimestamp();
                cal.setTime(eventDate);
                double startDate = cal.get(Calendar.DATE);
                double numOfDaysInMonth = cal.getActualMaximum(Calendar.DAY_OF_MONTH);
                double numberOfDaysInUse = numOfDaysInMonth - startDate;
                double fractionToCharge = numberOfDaysInUse / numOfDaysInMonth;
                BigDecimal amount = BigDecimal.valueOf(fractionToCharge * Prices.MONTHLY_COST);
                amount.scale();
                entry.setAmount(amount);
                entry.setUser(user);
                entry.setTimestamp(eventDate);
                userManager.saveOrUpdate(entry);
            }
        }
    }
}
@Transactional
public Collection<BillingEntry> getLastMonthsBillingEntriesForUser(Long id) {
    if (log.isDebugEnabled())
        log.debug("Getting all the billing entries for last month for user with ID " + id);
    //String queryString = "select billingEntry from BillingEntry as billingEntry where billingEntry>=:firstOfLastMonth and billingEntry.timestamp<:firstOfCurrentMonth and billingEntry.user=:user";
    String queryString = "select be from BillingEntry as be join be.user as user where user.id=:id and be.timestamp>=:firstOfLastMonth and be.timestamp<:firstOfCurrentMonth";
    //This parameter will be the start of the last month ie. start of billing cycle
    SearchParameter firstOfLastMonth = new SearchParameter();
    firstOfLastMonth.setTemporalType(TemporalType.DATE);
    //this parameter holds the start of the CURRENT month - ie. end of billing cycle
    SearchParameter firstOfCurrentMonth = new SearchParameter();
    firstOfCurrentMonth.setTemporalType(TemporalType.DATE);
    Query query = super.entityManager.createQuery(queryString);
    query.setParameter("firstOfCurrentMonth", getFirstOfCurrentMonth());
    query.setParameter("firstOfLastMonth", getFirstOfLastMonth());
    query.setParameter("id", id);
    List<BillingEntry> entries = query.getResultList();
    return entries;
}
public MonthlyBill createXMLBillForUser(Long id, Collection<BillingEntry> billingEntries) {
    BillingHistoryManager manager = ManagerFactory.getBillingHistoryManager();
    UserManager userManager = ManagerFactory.getUserManager();
    MonthlyBill mb = new MonthlyBill();
    User user = userManager.getUser(id);
    mb.setUser(user);
    mb.setTimestamp(new Date());
    Set<BillingEntry> entries = new HashSet<BillingEntry>();
    entries.addAll(billingEntries);
    String xml = createXmlForMonthlyBill(user, entries);
    mb.setXmlBill(xml);
    mb.setBillingEntries(entries);
    MonthlyBill bill = (MonthlyBill) manager.saveOrUpdate(mb);
    return bill;
}
Help with this issue would be greatly appreciated, as it's been wracking my brain for weeks now!
Thanks in advance,
Gearoid.
Is your top method also transactional? If yes: most of the time I've encountered that kind of problem, it was a flush that was not done at the right time by Hibernate.
Try adding a call to session.flush() at the beginning of the getLastMonthsBillingEntriesForUser method and see if that addresses your problem.
Call session.flush() AND session.close() before getLastMonthsBillingEntriesForUser gets called.
Please correct my assumptions if they are not correct...
As far as I can tell, the relationship between entry and user is a many to one.
So why is your query doing a "one to many" type join? You should rather make your query:
select be from BillingEntry as be where be.user=:user and be.timestamp >= :firstOfLastMonth and be.timestamp < :firstOfCurrentMonth
And then pass in the User object, not the user id. This query will be a little lighter in that it will not have to fetch the details of the user, i.e. it will not have to do a select on the user.
Unfortunately this is probably not causing your problem, but it's worth fixing nevertheless.
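For illustration, a sketch of the revised setup; it assumes a loaded User instance (as in createXMLBillForUser) and reuses the date helpers from the question:

String queryString = "select be from BillingEntry as be"
        + " where be.user = :user"
        + " and be.timestamp >= :firstOfLastMonth"
        + " and be.timestamp < :firstOfCurrentMonth";
Query query = entityManager.createQuery(queryString);
// Bind the entity itself rather than its id, avoiding the join on user.
query.setParameter("user", user);
query.setParameter("firstOfLastMonth", getFirstOfLastMonth());
query.setParameter("firstOfCurrentMonth", getFirstOfCurrentMonth());
List<BillingEntry> entries = query.getResultList();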
Move the declaration of BillingEntry entry = new BillingEntry(); to within the for loop. That code looks like it's updating one entry over and over again.
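A sketch of what that change looks like in createBillingEntriesForUser:

for (AccountEvent event : events) {
    if (event.getEventType().equals(EventType.ENABLE)) {
        // A fresh entity per event, so each one is inserted instead of
        // the same instance being updated over and over.
        BillingEntry entry = new BillingEntry();
        ... // compute the amount as before
        entry.setAmount(amount);
        entry.setUser(user);
        entry.setTimestamp(event.getTimestamp());
        userManager.saveOrUpdate(entry);
    }
}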
I'm guessing here, but what you've coded goes against what I think I know about Java persistence and Hibernate.
Are you certain that those entries are being persisted properly? In my mind, what is happening is that a new BillingEntry is created and then persisted. From that point on, each iteration of the loop simply changes the values of that same entry and calls merge. It doesn't look like you're doing anything to create a new BillingEntry after the first time, so no new ids are generated, which is why you can't retrieve the entries later.
That being said, I'm not convinced the timing of the flush isn't a culprit here either, so I'll wait with bated breath for the downvotes.