I am developing a web application using Spring Boot and the Vaadin Framework that is supposed to process Microsoft Excel documents, store the spreadsheet data in custom Java objects, and finally persist them to a database. I am using the Excel Uploader add-on from the Vaadin Directory, which handles exactly this kind of task.
As shown in the official examples for this add-on, it works perfectly for populating a Vaadin Grid component with data from a spreadsheet and reads it properly with no issues. However, I have problems when I try to store this data in a custom object and call any method of that type, because a ClassCastException is thrown even though the program appears to be casting between the same data types. Whenever I add a SucceededListener using a lambda, the data gathered from the Excel document is passed as a collection of the generic Object type, so it has to be cast to my custom type before I can call its specific methods. Yet when the program runs and I upload an Excel document via the UI, this exception is thrown, even though that collection of generic Objects should hold instances of my custom type by that point.
This is the method where the Excel Uploader is configured:
public void configureExcelUploader() {
    ExcelUploader<Person> excelUploader = new ExcelUploader<>(Person.class);
    excelUploader.addSucceededListener((event, items) -> {
        List<Person> persons = (List<Person>) items;
        persons.forEach(Person::callCustomMethod);
    });
    Upload uploadButton = new Upload("Upload");
    uploadButton.setReceiver(excelUploader);
    uploadButton.addSucceededListener(excelUploader);
}
I am probably missing something, or simply do not know how to do this kind of operation properly. Does anyone know the proper way of storing the results in a custom object in this particular case?
I solved this problem by removing Spring Boot DevTools from the Maven dependencies, after which everything worked perfectly. DevTools loads application classes in its own restart classloader, so the Person class referenced in the listener and the Person instances produced by the uploader can come from two different classloaders; to the JVM these are distinct types, which is why a cast between seemingly identical classes fails.
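To see why two classloaders produce this effect, here is a minimal, self-contained sketch. The IsolatingLoader below is an illustrative stand-in for DevTools' restart classloader, not its actual implementation: it defines the Person class itself instead of delegating to its parent, yielding a second, distinct Class object for the same name.

```java
import java.io.InputStream;

public class TwoLoadersDemo {

    public static class Person { }

    // Defines Person itself instead of delegating to the parent classloader,
    // producing a second, distinct Class object for the same class name --
    // analogous to what a restart classloader does to application classes.
    static class IsolatingLoader extends ClassLoader {
        IsolatingLoader() {
            super(TwoLoadersDemo.class.getClassLoader());
        }

        @Override
        protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
            if (name.equals(Person.class.getName())) {
                String resource = "/" + name.replace('.', '/') + ".class";
                try (InputStream in = TwoLoadersDemo.class.getResourceAsStream(resource)) {
                    byte[] bytes = in.readAllBytes();
                    return defineClass(name, bytes, 0, bytes.length);
                } catch (Exception e) {
                    throw new ClassNotFoundException(name, e);
                }
            }
            return super.loadClass(name, resolve);
        }
    }

    public static void main(String[] args) throws Exception {
        Object p = new IsolatingLoader()
                .loadClass(Person.class.getName())
                .getDeclaredConstructor()
                .newInstance();

        // Same fully qualified name, but not the same Class object:
        System.out.println(p.getClass().getName().equals(Person.class.getName())); // true
        System.out.println(p.getClass() == Person.class);                          // false

        try {
            Person person = (Person) p; // fails: different defining classloaders
            System.out.println(person);
        } catch (ClassCastException e) {
            System.out.println("ClassCastException");
        }
    }
}
```

Removing DevTools means every class is loaded exactly once by the application classloader, so the cast succeeds again.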
We recently upgraded Spring Data Mongo, and since then some read and update queries have been failing with the following error:
org.springframework.data.mapping.MappingException: Expected to read Document Document{} into type class java.lang.Object but didn't find a PersistentEntity for the latter!
While debugging, I found that this error is thrown when the target type is Object and the document does not have a _class field.
It was working fine before 3.2.
Checking the 3.1 source code, the read method in the MappingMongoConverter class has logic to cast the BSON to an Object type if the ClassTypeInformation is Object.
In a few of our object models, we store additional information about the document as an Object.
Is it possible to configure Spring to convert the BSON to Object instead of failing?
Thanks in advance.
Spring Data Mongo can take the type information from _class, but if the application cannot rely on that field, it needs to tell the mapping layer what type to expect.
My current project uses this for that purpose:
public interface BusinessesRepository extends MongoRepository<SomeApplicationType, String> { }
(Spring Data will autogenerate a subclass with the proper class selection logic for such an interface.)
Slightly lower level, you have MongoTemplate with its find(Query, Class<T>) method, which accepts something like SomeApplicationType.class as the second parameter and uses that information to instantiate objects and populate their fields.
I don't know what function your code calls to retrieve and deserialize your data, so I can't be more specific.
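As an illustration only, here is a minimal sketch of the MongoTemplate approach. SomeApplicationType and the "name" field are placeholders from this answer, and the class is not a tested drop-in for your application:

```java
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
import java.util.List;

public class SomeApplicationTypeDao {

    private final MongoTemplate mongoTemplate;

    public SomeApplicationTypeDao(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    // The explicit target class tells the converter what to instantiate,
    // so it does not have to rely on a _class field in the stored document.
    public List<SomeApplicationType> findByName(String name) {
        Query query = new Query(Criteria.where("name").is(name));
        return mongoTemplate.find(query, SomeApplicationType.class);
    }
}
```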
I'm trying to get all the ADTs (application display templates) of a website knowing its groupId. However, it seems the ADTs are mixed in with all the other templates of the site in the DDMTemplate table in the database.
DDMTemplates are a general portal concept for managing templates for different types of portal assets (not only ADTs). DDMTemplateLocalService is the service to use to list the ADTs for a certain asset type. You first need to fetch the ClassNameId for the desired asset type to render. For example:
com.liferay.portal.kernel.service.ClassNameLocalService.getClassNameId(AssetEntry.class)
for AssetPublisher entries (or BlogsEntry and so on - for all other types of interest).
Having this id, you can query the ADTs of a site (groupId) using:
com.liferay.dynamic.data.mapping.service.DDMTemplateLocalService.getTemplates(long groupId, long classNameId)
Instead of using the *LocalServiceUtil static methods, you could also inject the service using the @Reference annotation, for example: @Reference private DDMTemplateLocalService ddmTemplateService;.
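Putting the pieces together, a sketch of an OSGi component that looks up a site's ADTs for Asset Publisher entries, built from the calls above. The import paths assume a Liferay 7.x setup, and AdtLookup is a hypothetical component name; treat this as untested:

```java
import com.liferay.asset.kernel.model.AssetEntry;
import com.liferay.dynamic.data.mapping.model.DDMTemplate;
import com.liferay.dynamic.data.mapping.service.DDMTemplateLocalService;
import com.liferay.portal.kernel.service.ClassNameLocalService;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

import java.util.List;

@Component(service = AdtLookup.class)
public class AdtLookup {

    @Reference
    private ClassNameLocalService classNameLocalService;

    @Reference
    private DDMTemplateLocalService ddmTemplateLocalService;

    // Returns the ADTs of the given site (groupId) for Asset Publisher entries.
    public List<DDMTemplate> getAssetPublisherAdts(long groupId) {
        long classNameId = classNameLocalService.getClassNameId(AssetEntry.class);
        return ddmTemplateLocalService.getTemplates(groupId, classNameId);
    }
}
```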
Does anyone know how to use data from an Oracle database in Java with NetBeans?
I have successfully connected the database to NetBeans, but I do not know how to actually use this data. All I have done is create entity classes from the database and JPA controller classes from the entity classes, which contain methods for CRUD functionality, but I do not know how to use them.
Here is an example of an "Insert" button:
Staff s1 = new Staff();
s1.setStaffId(txtId.getText());
EntityManagerFactory emf = Persistence.createEntityManagerFactory("PCR4B");
StaffJpaController ajc = new StaffJpaController(emf);
try {
    ajc.create(s1);
} catch (Exception ex) {
    Logger.getLogger(StaffGUI.class.getName()).log(Level.SEVERE, null, ex);
}
Is there anything wrong with this part of the code? For some reason I get many errors, but I am not sure whether they come from this part of the code or not.
Thanks in advance.
There are too many errors to display here; the build is successful, though.
I was also wondering whether there is a way to store the data of a table in an ArrayList, use the ArrayList to make changes to elements, and then store them back to the database.
The JPA controller class provides the CRUD functionality, but I do not know how to make use of its methods, and I do not understand what the EntityManager is.
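For context, the generated JPA controllers wrap a standard JPA EntityManager, which is the object that actually reads and writes entities. Below is a minimal sketch of the underlying calls, reusing the Staff entity and the "PCR4B" persistence unit from the code above; the staff ID value is a placeholder and this is not a tested drop-in:

```java
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import java.util.List;

public class StaffCrudSketch {

    public static void main(String[] args) {
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("PCR4B");
        EntityManager em = emf.createEntityManager();
        try {
            // Create: writes go inside a transaction.
            em.getTransaction().begin();
            Staff s1 = new Staff();
            s1.setStaffId("S001"); // placeholder value
            em.persist(s1);
            em.getTransaction().commit();

            // Read: a JPQL query returns a List of managed entities --
            // effectively the ArrayList the question asks about.
            List<Staff> all = em.createQuery("SELECT s FROM Staff s", Staff.class)
                                .getResultList();

            // Update: modify the managed entities inside a transaction;
            // the changes are flushed back to the database on commit.
            em.getTransaction().begin();
            for (Staff s : all) {
                // call setters on s here
            }
            em.getTransaction().commit();
        } finally {
            em.close();
            emf.close();
        }
    }
}
```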
I have created a custom field on the Contact object in Salesforce whose API name is "Resume_Text__c", and I'm making a SOAP call from a Java implementation to get the value of that field using the following SOQL query:
SELECT Resume_Text__c FROM Contact
But execution of the query throws the following exception:
No such column 'Resume_Text__c' on entity 'Contact'. If you are attempting to use a custom field, be sure to append the '__c' after the custom field name. Please reference your WSDL or the describe call for the appropriate names.
So how can I access the custom field via the SOAP API in Java?
Whenever you use the Enterprise.wsdl file in your implementation, you need to regenerate it every time you create new fields or objects in your Salesforce.com environment, so that all the dependency mappings are imported; otherwise, go with the Partner.wsdl.
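To illustrate the Partner WSDL route, here is a sketch using the Force.com Web Service Connector (WSC) partner client. Because the Partner API is untyped, newly added custom fields work without regenerating stubs. The credentials are placeholders and the snippet is untested:

```java
import com.sforce.soap.partner.Connector;
import com.sforce.soap.partner.PartnerConnection;
import com.sforce.soap.partner.QueryResult;
import com.sforce.soap.partner.sobject.SObject;
import com.sforce.ws.ConnectorConfig;

public class ResumeTextQuery {

    public static void main(String[] args) throws Exception {
        ConnectorConfig config = new ConnectorConfig();
        config.setUsername("user@example.com");       // placeholder
        config.setPassword("password+securityToken"); // placeholder

        PartnerConnection connection = Connector.newConnection(config);

        // Untyped query: the custom field is addressed by its API name,
        // so no stub regeneration is needed when fields change.
        QueryResult result = connection.query("SELECT Resume_Text__c FROM Contact");
        for (SObject record : result.getRecords()) {
            System.out.println(record.getField("Resume_Text__c"));
        }
    }
}
```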
This might seem like an odd question, but I am trying to get a handle on the "best practice" for converting an application that uses something like Roo's or Grails' generated controllers (which provide basic CRUD functionality) into something that returns a JSON response body instead, for use in a JavaScript application.
The ambiguity of technology here is because I haven't actually started the project yet. I am still trying to decide which (Java-based) technology to use, and to see what kind of productivity tools I should learn/use in the process. It will be a web application, and it will use a database persistence layer. All other details are up in the air.
Perhaps the easiest way to accomplish my goal is to develop using some sort of AJAX plugin to start with, but most of the tutorials and descriptions out there say to start with a normal MVC architecture. Roo seems to make conversion of the controllers it generates to JSON-friendly return types difficult, and I'm not familiar enough with Groovy/Grails to know what it takes to do that.
This is mostly a learning experience for me, and I am open to any criticism or advice, but this being a Q&A forum, I realize I need to ask an objective question of some sort. To fill that need, I ask:
What is the best way to set up an AJAX/RESTful interface for my entities in Roo and/or Grails?
I recently did exactly this with a Grails application and found it surprisingly easy to take the generated controllers and have them output JSON, XML, or the HTML from the view, depending on content negotiation.
The places in the Grails manual to look into are the section(s) on Content Negotiation and, if you need to deal with JSON or XML input, marshaling.
To get JSON and XML output, I changed the default list() method to this (Session, in this case, is one of my domain classes):
def list() {
    params.max = Math.min(params.max ? params.int('max') : 10, 100)
    def response = [sessionInstanceList: Session.list(params), sessionInstanceTotal: Session.count()]
    withFormat {
        html response
        json { render response as JSON }
        xml { render response as XML }
    }
}
Anywhere you are returning just an object by default, you will want to replace the returned value with a withFormat block.
You also may need to update your Config.groovy file where it deals with mime types. Here's what I have:
grails.mime.file.extensions = true // enables the parsing of file extensions from URLs into the request format
grails.mime.use.accept.header = true
grails.mime.types = [
    html: ['text/html', 'application/xhtml+xml'],
    xml: ['text/xml', 'application/xml'],
    text: 'text/plain',
    js: 'text/javascript',
    rss: 'application/rss+xml',
    atom: 'application/atom+xml',
    css: 'text/css',
    csv: 'text/csv',
    all: '*/*',
    json: ['application/json', 'text/json'],
    form: 'application/x-www-form-urlencoded',
    multipartForm: 'multipart/form-data'
]
As input (to an update() or save() action, for example), JSON and XML payloads will automatically be unmarshaled and bound just as form input would be, but I've found the unmarshaling process to be a bit picky (especially with JSON).
I found that, in order for JSON to be handled correctly in the update() method, the class attribute had to be present and correct on the inbound JSON object. Since the library I was using in my client application didn't make this easy to deal with, I switched to using XML instead.
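For example, an inbound JSON payload for update() would need to look something like the following; the class value is a hypothetical fully qualified name for the Session domain class, and the other fields are placeholders:

```json
{
    "class": "org.example.Session",
    "id": 1,
    "name": "Morning session"
}
```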