I have a table with a column of DATE type. Let's say the table column is:
REPORTING_CONVERSION_DATE DATE NOT NULL
The DAO class (which implements Serializable) has the corresponding field defined as:
@Temporal(TemporalType.TIMESTAMP)
@Column(name = "REPORTING_CONVERSION_DATE")
private Date reportingConversionDate;
I extract the record from the database, use com.fasterxml.jackson.databind.ObjectMapper to get the JSON string representation of the object, compress the JSON, and store it in Cosmos DB in Azure. On the reverse leg, I fetch the record from Cosmos DB, decompress it, and use ObjectMapper to read the Java object back.
I found that on one machine (the code runs on Spring Boot's embedded Tomcat, with jar as the packaging type) I see this:
reportingConversionDate : 1424483105000
while on another machine (the code runs on a managed Tomcat, with war as the packaging type) I see
reportingConversionDate : 2015-02-21T01:45:05.000+0000
Why is the behaviour different?
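This usually comes down to how the ObjectMapper on each classpath is configured, rather than to Tomcat or the packaging type: out of the box Jackson writes java.util.Date as epoch milliseconds (SerializationFeature.WRITE_DATES_AS_TIMESTAMPS is enabled by default), whereas a mapper with that feature disabled, or with a date format set by the surrounding framework, emits ISO-8601 text. A hedged sketch that pins the behaviour explicitly so both deployments agree:
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializationFeature;

public class MapperFactory {
    // Hypothetical helper: returns a mapper with the Date representation pinned.
    public static ObjectMapper pinnedMapper() {
        ObjectMapper mapper = new ObjectMapper();
        // enabled  -> epoch millis, e.g. 1424483105000 (Jackson's own default)
        // disabled -> ISO-8601 text, e.g. 2015-02-21T01:45:05.000+0000
        mapper.disable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS);
        return mapper;
    }
}
Using one shared, explicitly configured mapper on both sides also protects the decompress-and-read leg from the same drift.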
I'm not sure what the root issue is, but this is the most basic reproduction I can make of the problem. When I run something through Kafka and my streaming job picks it up, it runs through the entire process until it's time to save out to Cassandra, at which point it hangs. Any and all help is appreciated; I've been banging my head against this for too long.
Snippets showing the basic problem below.
StreamingJob.java:
final DataStream<Pojo> stream = env.addSource(source)
.process(new MyProcess());
CassandraSink.addSink(stream).setClusterBuilder(new ClusterBuilder() {
@Override
protected Cluster buildCluster(Cluster.Builder builder) {
return builder.withCredentials("","")
.addContactPoint("127.0.0.1").withPort(9042).build();
}
})
.setMapperOptions(() -> new Mapper.Option[]{Mapper.Option.saveNullFields(false)})
.setDefaultKeyspace("my_keyspace").build();
env.execute(jobConfig.getName());
MyProcess.java (body of processElement; the signature was elided in the original post):
@Override
public void processElement(...) {
    Pojo myPojo = doSomethingtoMyInput();
    out.collect(myPojo);
    // Debugging proves it works to this point
}
MyPojo.java:
@Table(keyspace = "my_keyspace", name = "my_table")
public class MyPojo {
    @PartitionKey(0)
    @Column
    String user_id;
    @PartitionKey(1)
    @Column
    String other_id;
    @ClusteringColumn
    @Column
    java.util.Date time_id;
    // Getters and setters using standard notation
}
My Cassandra schema:
CREATE TABLE my_table (user_id text,
other_id text,
time_id timestamp,
PRIMARY KEY ((user_id, other_id), time_id)
) WITH CLUSTERING ORDER BY (time_id DESC)
You'll need to verify the format of the time_id in the source as it might not be compatible with the CQL column.
In your POJO you've mapped it to java.util.Date; if the field from the source contains just a date, that might be the reason it's not working.
A CQL timestamp is a 64-bit signed integer that represents the number of milliseconds since the Unix epoch. The value of the field from the source can be either (a) an integer, or (b) a literal string that looks like yyyy-mm-dd HH:mm. The full list of valid ISO 8601 formats is documented under the CQL timestamp type. Cheers!
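For illustration, both literal forms should be accepted for the time_id column (the values here are made up; the string form is one of the ISO 8601 variants):
-- (a) integer: milliseconds since the Unix epoch
INSERT INTO my_table (user_id, other_id, time_id) VALUES ('u1', 'o1', 1424483105000);
-- (b) ISO 8601 string literal
INSERT INTO my_table (user_id, other_id, time_id) VALUES ('u1', 'o1', '2015-02-21 01:45:05+0000');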
Found the answer after much fighting. Flink and Cassandra make for a very strict and tenuous connection: everything must be perfectly aligned. A decimal in Cassandra requires a Decimal in Java and, more confusingly, the timestamp in Cassandra would only work with a long value in Java.
Hopefully this helps others who come across this same issue.
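For reference, a minimal sketch of the adjustment described above, with the clustering column mapped to a long holding epoch milliseconds (DataStax mapper annotations assumed):
import com.datastax.driver.mapping.annotations.ClusteringColumn;
import com.datastax.driver.mapping.annotations.Column;
import com.datastax.driver.mapping.annotations.PartitionKey;
import com.datastax.driver.mapping.annotations.Table;

@Table(keyspace = "my_keyspace", name = "my_table")
public class MyPojo {
    @PartitionKey(0)
    @Column
    String user_id;
    @PartitionKey(1)
    @Column
    String other_id;
    @ClusteringColumn
    @Column
    long time_id; // epoch milliseconds, e.g. System.currentTimeMillis()
    // getters and setters as before
}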
Edit: Solution: Upgrading to ISIS 1.17.0 and setting the property isis.persistor.datanucleus.standaloneCollection.bulkLoad=false solved the first two problems.
I am using Apache ISIS 1.16.2 and I am trying to store Blob/Clob content in a MariaDB database (v10.1.35). For this I use the DB connector org.mariadb.jdbc.mariadb-java-client (v2.3.0) and, in the code, the @Persistent annotation as shown in many examples and in the ISIS documentation.
Using the code below, I just get one single column named content_name (in which the Blob object is serialized in binary form) instead of the three columns content_name, content_mimetype and content_bytes.
This is the Document class with the Blob field content:
@PersistenceCapable(identityType = IdentityType.DATASTORE)
@DatastoreIdentity(strategy = IdGeneratorStrategy.IDENTITY, column = "id")
@DomainObject(editing = Editing.DISABLED, autoCompleteRepository = DocumentRepository.class, objectType = "Document")
@Getter
// ...
public class Document implements Comparable<Document> {
    @Persistent(
            defaultFetchGroup = "false",
            columns = {
                    @Column(name = "content_name"),
                    @Column(name = "content_mimetype"),
                    @Column(name = "content_bytes",
                            jdbcType = "BLOB",
                            sqlType = "LONGVARBINARY")
            })
    @Nonnull
    @Column(allowsNull = "false")
    @Property(optionality = Optionality.MANDATORY)
    private Blob content;
    // ...
    @Column(allowsNull = "false")
    @Property
    private Date created = new Date();

    public Date defaultCreated() {
        return new Date();
    }

    @Column(allowsNull = "true")
    @Property
    @Setter
    private String owner;
    // ...
}
This creates the following schema for the DomainObject class Document with just one column for the Blob field:
CREATE TABLE `document` (
`id` bigint(20) NOT NULL AUTO_INCREMENT,
`content_name` mediumblob,
`created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`owner` varchar(255) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
Normally, the class org.apache.isis.objectstore.jdo.datanucleus.valuetypes.IsisBlobMapping of the ISIS framework should do the mapping. But it seems that this Mapper is somehow not involved...
1. Question: How do I get the Blob field split up into the three columns (as described above and in many demo projects)? Even if I switch to HSQLDB I still get only one column, so this might not be an issue with MariaDB.
2. Question: If I use a Blob/Clob field in a class that inherits from another DomainObject class, I often get an org.datanucleus.exceptions.NucleusException (stack trace below) and I cannot make head or tail of it. What are potential pitfalls when dealing with inheritance? Why am I getting this exception?
3. Question: I need to store documents belonging to domain objects (as you might have guessed). The proper way of doing so would be to store the documents in a file system tree instead of the database (which also has, by default, some size limitations for object data) and reference the files from the object. In the DataNucleus documentation I found the extension serializeToFileLocation that should do exactly that. I tried it by adding the line @Extension(vendorName = "datanucleus", key = "serializeToFileLocation", value = "document-repository") to the Blob field, but nothing happened. So my question is: is this DataNucleus extension compatible with Apache ISIS?
If this extension conflicts with Isis, would it be possible to have a javax.jdo.listener.StoreLifecycleListener or org.apache.isis.applib.AbstractSubscriber that stores the Blob on a file system before persisting the domain object to database and restoring it before loading? Are there better solutions available?
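On the listener idea, a very rough sketch of what a javax.jdo.listener.StoreLifecycleListener could look like; everything besides the JDO API types and the Document/Blob accessors above is hypothetical, and registration on the PersistenceManagerFactory plus the reverse load path are left out:
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import javax.jdo.listener.InstanceLifecycleEvent;
import javax.jdo.listener.StoreLifecycleListener;

public class BlobToFileListener implements StoreLifecycleListener {

    private final Path repositoryDir = Paths.get("document-repository");

    @Override
    public void preStore(InstanceLifecycleEvent event) {
        Object pc = event.getPersistentInstance();
        if (pc instanceof Document) {
            Document doc = (Document) pc;
            try {
                // write the raw bytes to the file system before the row is persisted
                Path target = repositoryDir.resolve(doc.getContent().getName());
                Files.write(target, doc.getContent().getBytes());
                // a hypothetical reference field would then replace the blob content:
                // doc.setContentRef(target.toString());
            } catch (IOException e) {
                throw new RuntimeException("Could not externalize blob", e);
            }
        }
    }

    @Override
    public void postStore(InstanceLifecycleEvent event) {
        // no-op in this sketch
    }
}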
That's it for now. Thank you in advance! ;-)
The stack trace to question 2:
... (other Wicket related stack trace)
Caused by: org.datanucleus.exceptions.NucleusException: Creation of SQLExpression for mapping "org.datanucleus.store.rdbms.mapping.java.SerialisedMapping" caused error
at org.datanucleus.store.rdbms.sql.expression.SQLExpressionFactory.newExpression(SQLExpressionFactory.java:199)
at org.datanucleus.store.rdbms.sql.expression.SQLExpressionFactory.newExpression(SQLExpressionFactory.java:155)
at org.datanucleus.store.rdbms.request.LocateBulkRequest.getStatement(LocateBulkRequest.java:158)
at org.datanucleus.store.rdbms.request.LocateBulkRequest.execute(LocateBulkRequest.java:283)
at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.locateObjects(RDBMSPersistenceHandler.java:564)
at org.datanucleus.ExecutionContextImpl.findObjects(ExecutionContextImpl.java:3313)
at org.datanucleus.api.jdo.JDOPersistenceManager.getObjectsById(JDOPersistenceManager.java:1850)
at org.apache.isis.core.runtime.system.persistence.PersistenceSession.loadPersistentPojos(PersistenceSession.java:1010)
at org.apache.isis.core.runtime.system.persistence.PersistenceSession.adaptersFor(PersistenceSession.java:1603)
at org.apache.isis.core.runtime.system.persistence.PersistenceSession.adaptersFor(PersistenceSession.java:1573)
at org.apache.isis.viewer.wicket.model.models.EntityCollectionModel$Type$1.loadInBulk(EntityCollectionModel.java:107)
at org.apache.isis.viewer.wicket.model.models.EntityCollectionModel$Type$1.load(EntityCollectionModel.java:93)
at org.apache.isis.viewer.wicket.model.models.EntityCollectionModel.load(EntityCollectionModel.java:454)
at org.apache.isis.viewer.wicket.model.models.EntityCollectionModel.load(EntityCollectionModel.java:70)
at org.apache.wicket.model.LoadableDetachableModel.getObject(LoadableDetachableModel.java:135)
at org.apache.isis.viewer.wicket.ui.components.collectioncontents.ajaxtable.CollectionContentsSortableDataProvider.size(CollectionContentsSortableDataProvider.java:68)
at org.apache.wicket.markup.repeater.data.DataViewBase.internalGetItemCount(DataViewBase.java:142)
at org.apache.wicket.markup.repeater.AbstractPageableView.getItemCount(AbstractPageableView.java:235)
at org.apache.wicket.markup.repeater.AbstractPageableView.getRowCount(AbstractPageableView.java:216)
at org.apache.wicket.markup.repeater.AbstractPageableView.getViewSize(AbstractPageableView.java:314)
at org.apache.wicket.markup.repeater.AbstractPageableView.getItemModels(AbstractPageableView.java:99)
at org.apache.wicket.markup.repeater.RefreshingView.onPopulate(RefreshingView.java:93)
at org.apache.wicket.markup.repeater.AbstractRepeater.onBeforeRender(AbstractRepeater.java:124)
at org.apache.wicket.markup.repeater.AbstractPageableView.onBeforeRender(AbstractPageableView.java:115)
at org.apache.wicket.Component.internalBeforeRender(Component.java:950)
at org.apache.wicket.Component.beforeRender(Component.java:1018)
at org.apache.wicket.MarkupContainer.onBeforeRenderChildren(MarkupContainer.java:1825)
... 81 more
Caused by: org.datanucleus.exceptions.NucleusException: Unable to create SQLExpression for mapping of type "org.datanucleus.store.rdbms.mapping.java.SerialisedMapping" since not supported
at org.datanucleus.store.rdbms.sql.expression.SQLExpressionFactory.newExpression(SQLExpressionFactory.java:189)
at org.datanucleus.store.rdbms.sql.expression.SQLExpressionFactory.newExpression(SQLExpressionFactory.java:155)
at org.datanucleus.store.rdbms.request.LocateBulkRequest.getStatement(LocateBulkRequest.java:158)
at org.datanucleus.store.rdbms.request.LocateBulkRequest.execute(LocateBulkRequest.java:283)
at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.locateObjects(RDBMSPersistenceHandler.java:564)
at org.datanucleus.ExecutionContextImpl.findObjects(ExecutionContextImpl.java:3313)
at org.datanucleus.api.jdo.JDOPersistenceManager.getObjectsById(JDOPersistenceManager.java:1850)
at org.apache.isis.core.runtime.system.persistence.PersistenceSession.loadPersistentPojos(PersistenceSession.java:1010)
at org.apache.isis.core.runtime.system.persistence.PersistenceSession.adaptersFor(PersistenceSession.java:1603)
at org.apache.isis.core.runtime.system.persistence.PersistenceSession.adaptersFor(PersistenceSession.java:1573)
at org.apache.isis.viewer.wicket.model.models.EntityCollectionModel$Type$1.loadInBulk(EntityCollectionModel.java:107)
at org.apache.isis.viewer.wicket.model.models.EntityCollectionModel$Type$1.load(EntityCollectionModel.java:93)
at org.apache.isis.viewer.wicket.model.models.EntityCollectionModel.load(EntityCollectionModel.java:454)
at org.apache.isis.viewer.wicket.model.models.EntityCollectionModel.load(EntityCollectionModel.java:70)
at org.apache.wicket.model.LoadableDetachableModel.getObject(LoadableDetachableModel.java:135)
at org.apache.isis.viewer.wicket.ui.components.collectioncontents.ajaxtable.CollectionContentsSortableDataProvider.size(CollectionContentsSortableDataProvider.java:68)
at org.apache.wicket.markup.repeater.data.DataViewBase.internalGetItemCount(DataViewBase.java:142)
at org.apache.wicket.markup.repeater.AbstractPageableView.getItemCount(AbstractPageableView.java:235)
at org.apache.wicket.markup.repeater.AbstractPageableView.getRowCount(AbstractPageableView.java:216)
at org.apache.wicket.markup.repeater.AbstractPageableView.getViewSize(AbstractPageableView.java:314)
at org.apache.wicket.markup.repeater.AbstractPageableView.getItemModels(AbstractPageableView.java:99)
at org.apache.wicket.markup.repeater.RefreshingView.onPopulate(RefreshingView.java:93)
at org.apache.wicket.markup.repeater.AbstractRepeater.onBeforeRender(AbstractRepeater.java:124)
at org.apache.wicket.markup.repeater.AbstractPageableView.onBeforeRender(AbstractPageableView.java:115)
// <-- 8 times the following lines
at org.apache.wicket.Component.internalBeforeRender(Component.java:950)
at org.apache.wicket.Component.beforeRender(Component.java:1018)
at org.apache.wicket.MarkupContainer.onBeforeRenderChildren(MarkupContainer.java:1825)
at org.apache.wicket.Component.onBeforeRender(Component.java:3916)
// -->
at org.apache.wicket.Page.onBeforeRender(Page.java:801)
at org.apache.wicket.Component.internalBeforeRender(Component.java:950)
at org.apache.wicket.Component.beforeRender(Component.java:1018)
at org.apache.wicket.Component.internalPrepareForRender(Component.java:2236)
at org.apache.wicket.Page.internalPrepareForRender(Page.java:242)
at org.apache.wicket.Component.render(Component.java:2325)
at org.apache.wicket.Page.renderPage(Page.java:1018)
at org.apache.wicket.request.handler.render.WebPageRenderer.renderPage(WebPageRenderer.java:124)
at org.apache.wicket.request.handler.render.WebPageRenderer.respond(WebPageRenderer.java:195)
at org.apache.wicket.core.request.handler.RenderPageRequestHandler.respond(RenderPageRequestHandler.java:175)
at org.apache.wicket.request.cycle.RequestCycle$HandlerExecutor.respond(RequestCycle.java:895)
at org.apache.wicket.request.RequestHandlerStack.execute(RequestHandlerStack.java:64)
at org.apache.wicket.request.cycle.RequestCycle.execute(RequestCycle.java:265)
at org.apache.wicket.request.cycle.RequestCycle.processRequest(RequestCycle.java:222)
at org.apache.wicket.request.cycle.RequestCycle.processRequestAndDetach(RequestCycle.java:293)
at org.apache.wicket.protocol.http.WicketFilter.processRequestCycle(WicketFilter.java:261)
at org.apache.wicket.protocol.http.WicketFilter.processRequest(WicketFilter.java:203)
at org.apache.wicket.protocol.http.WicketFilter.doFilter(WicketFilter.java:284)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
at org.apache.isis.core.webapp.diagnostics.IsisLogOnExceptionFilter.doFilter(IsisLogOnExceptionFilter.java:52)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
at org.apache.shiro.web.servlet.AbstractShiroFilter.executeChain(AbstractShiroFilter.java:449)
at org.apache.shiro.web.servlet.AbstractShiroFilter$1.call(AbstractShiroFilter.java:365)
at org.apache.shiro.subject.support.SubjectCallable.doCall(SubjectCallable.java:90)
at org.apache.shiro.subject.support.SubjectCallable.call(SubjectCallable.java:83)
at org.apache.shiro.subject.support.DelegatingSubject.execute(DelegatingSubject.java:383)
at org.apache.shiro.web.servlet.AbstractShiroFilter.doFilterInternal(AbstractShiroFilter.java:362)
at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:125)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
// ... some Jetty stuff
at java.lang.Thread.run(Thread.java:748)
After some research, I think the problems in questions 1 and 2 seem to be related to ISIS bug report #1902.
In short: The datanucleus extension plugin resolving mechanism does not seem to find the ISIS value type adapters and therefore cannot know how to serialize ISIS's Blob/Clob types.
According to the aforementioned ISIS bug report, this problem is fixed in 1.17.0, so I am trying to upgrade from 1.16.2 to this version (which introduced many other problems, but that will be an extra topic).
For question 3 I have found Minio which addresses basically my problem, but it is a bit oversized for my needs. I will keep looking for other solutions to store Blob/Clobs to local file system and will keep Minio in mind...
UPDATE:
I upgraded my project to ISIS version 1.17.0. It solved my problem in question 1 (now I get three columns for a Blob/Clob object).
The problem in question 2 (NucleusException) is not solved by the upgrade. I figured out that it is only thrown when returning a list of DomainObjects with Blob/Clob field(s), i.e. when rendered as a standalone table. If I go directly to an object's entity view, no exception is thrown and I can see/modify/download the Blob/Clob content.
In the meantime, I wrote my own datanucleus plugin, that stores the Blobs/Clobs as files on a file system.
UPDATE 2
I found a solution for circumventing the org.datanucleus.exceptions.NucleusException: Unable to create SQLExpression for mapping of type "org.apache.isis.objectstore.jdo.datanucleus.valuetypes.IsisClobMapping" since not supported. It seems to be a problem with bulk-loading (but I do not know any details).
Deactivating bulk-load via the property isis.persistor.datanucleus.standaloneCollection.bulkLoad=false (which is initially set to true by ISIS archetypes) solved the problem.
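For reference, this is the line to change (assuming the isis.properties file that the ISIS archetypes generate under WEB-INF):
# WEB-INF/isis.properties
isis.persistor.datanucleus.standaloneCollection.bulkLoad=false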
For now I have a CSV with several columns per row. Eventually, I will have a SQL relational database structure. I was wondering if there are any libraries to easily extract this data into a list of Java objects.
Example:
title | location | date
EventA | los angeles, ca | 05-29-2014
EventB | New York, NY | 08-23-2013
This is the structure of the data in the CSV. I would have a Java object called Event:
Event(String title, String location, String date)
I am aware of openCSV. Is that what I would need to use for the CSV? If so, what is the corresponding solution for a SQL relational database?
Also, can reading a CSV only be done in the main method?
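openCSV would work for the CSV part. A minimal sketch, assuming opencsv's com.opencsv.CSVReader and the Event constructor above; note that it is an ordinary method, so reading does not have to happen in main():
import com.opencsv.CSVReader;
import java.io.FileReader;
import java.util.ArrayList;
import java.util.List;

public class EventCsvLoader {
    public static List<Event> load(String path) throws Exception {
        List<Event> events = new ArrayList<>();
        try (CSVReader reader = new CSVReader(new FileReader(path))) {
            reader.readNext(); // skip the header row: title | location | date
            String[] row;
            while ((row = reader.readNext()) != null) {
                events.add(new Event(row[0], row[1], row[2]));
            }
        }
        return events;
    }
}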
When you convert to the SQL database, you can use Apache's DbUtils for a low-level solution or Hibernate for a high-level solution.
dbutils
You can implement a ResultSetHandler to convert a result set into an object, or if it's a POJO the framework can convert it for you. There are examples on the Apache site.
http://commons.apache.org/proper/commons-dbutils/
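As a rough illustration of the POJO route (assuming a javax.sql.DataSource, an event table, and that Event has a no-arg constructor plus setters, which BeanListHandler needs):
import java.sql.SQLException;
import java.util.List;
import javax.sql.DataSource;
import org.apache.commons.dbutils.QueryRunner;
import org.apache.commons.dbutils.handlers.BeanListHandler;

public class EventDao {
    private final QueryRunner runner;

    public EventDao(DataSource ds) {
        this.runner = new QueryRunner(ds);
    }

    // maps each row to an Event by matching column labels to property names
    public List<Event> findAll() throws SQLException {
        return runner.query("SELECT title, location, date FROM event",
                new BeanListHandler<>(Event.class));
    }
}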
Hibernate
There are plenty of tutorials out there for working with Hibernate.
http://www.hibernate.org/
Try JSefa, which allows you to annotate Java classes that can be used in a serialization and de-serialization process.
From the tutorial:
The annotations for CSV are similar to the XML ones.
@CsvDataType()
public class Person {
    @CsvField(pos = 1)
    String name;
    @CsvField(pos = 2, format = "dd.MM.yyyy")
    Date birthDate;
}
Serialization
Serializer serializer = CsvIOFactory.createFactory(Person.class).createSerializer();
This time we used the super interface Serializer, so that we can abstract from the chosen format type (XML, CSV, FLR) in the following code.
The next should be no surprise:
serializer.open(writer);
// call serializer.write for every object to serialize
serializer.close(true);
The result
Erwin Schmidt;23.05.1964
Thomas Stumm;12.03.1979
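Reading the data back looks symmetric; a minimal sketch using JSefa's Deserializer interface (reader is an assumed java.io.Reader over the CSV):
Deserializer deserializer = CsvIOFactory.createFactory(Person.class).createDeserializer();
deserializer.open(reader);
while (deserializer.hasNext()) {
    Person person = deserializer.next();
    // use the person object
}
deserializer.close(true);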
In my web application, I am saving some text messages in a column of a DB table (Oracle). Earlier the maximum VARCHAR2 length was 500 bytes; now the maximum length has been increased to 4000 characters, so I need to add a CLOB field in my domain class.
Can anyone please clarify what steps need to be followed in order to create a CLOB field in my domain class? I also have a CLOB column in my DB.
(What is the command/syntax to start with?)
Adding the CLOB column at the database and adding the code below to the domain class is the solution:
@Lob
private String message;
If you need to create a CLOB field named message then you would run the following command on the Roo Shell:
field string --fieldName message --lob true
I have an action in struts2 that will query the database for an object and then copy it with a few changes. Then, it needs to retrieve the new objectID from the copy and create a file called objectID.txt.
Here is the relevant code:
Action Class:
ObjectVO objectVOcopy = objectService.searchObjects(objectId);
//Set the ID to 0 so a new row is added, instead of the current one being updated
objectVOcopy.setObjectId(0);
Date today = new Date();
Timestamp currentTime = new Timestamp(today.getTime());
objectVOcopy.setTimeStamp(currentTime);
//Add copy to database
objectService.addObject(objectVOcopy);
//Get the copy object's ID from the database
int newObjectId = objectService.findObjectId(currentTime);
File inboxFile = new File(parentDirectory.getParent()+"\\folder1\\folder2\\"+newObjectId+".txt");
ObjectDAO
//Retrieve identifying ID of copy object from database
List<ObjectVO> object = getHibernateTemplate().find("from ObjectVO where timeStamp = ?", currentTime);
return object.get(0).getObjectId();
The problem is that, more often than not, the ObjectDAO search method will not return anything. When debugging I've noticed that the Timestamp currentTime passed to it is usually about 1-2 ms off from the value in the database. I have worked around this bug by changing the Hibernate query to search for objects with a timestamp within 3 ms of the one passed, but I'm not sure where this discrepancy is coming from. I'm not recalculating currentTime; I'm using the same value to retrieve from the database as I am to write to the database. I'm also worried that when I deploy this to another server the discrepancy might be greater. Other than the objectID, this is the only unique identifier, so I need to use it to get the copy object.
Does anyone know why this is occurring, and is there a better workaround than just searching through a range? I'm using Microsoft SQL Server 2008 R2, btw.
Thanks.
The precision of SQL Server's DATETIME data type does not match what you can generate in other languages: DATETIME values are rounded to increments of .000, .003, or .007 seconds. This is why you can say:
DECLARE @d DATETIME = '20120821 23:59:59.997';
SELECT @d;
Result:
2012-08-21 23:59:59.997
Then try:
DECLARE @d DATETIME = '20120821 23:59:59.999';
SELECT @d;
Result:
2012-08-22 00:00:00.000
Since you are using SQL Server 2008 R2, you should make sure to use the DATETIME2 data type instead of DATETIME.
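For illustration, the same near-midnight literal survives when assigned to DATETIME2 (whose default precision is 100 nanoseconds):
DECLARE @d2 DATETIME2 = '20120821 23:59:59.999';
SELECT @d2;
Result:
2012-08-21 23:59:59.9990000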
That said, @RedFilter makes a good point: why are you relying on the timestamp when you can use the generated ID instead?
This feels wrong.
Other than the objectID, this is the only unique identifier
Databases have the concept of a unique identifier for a reason. You should really use that to retrieve an instance of your object.
You can use the get method on the Hibernate session and take advantage of the session and second level caches as well.
With your approach you execute a query every time you retrieve your object.
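A minimal sketch of that id-based flow using the question's HibernateTemplate (its save() returns the generated identifier, so there is no need to query by timestamp afterwards):
// add the copy and capture the generated ID directly
java.io.Serializable newObjectId = getHibernateTemplate().save(objectVOcopy);
// later, load the instance by ID (session and second-level caches apply)
ObjectVO copy = getHibernateTemplate().get(ObjectVO.class, newObjectId);
File inboxFile = new File(parentDirectory.getParent() + "\\folder1\\folder2\\" + newObjectId + ".txt");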