Using Cassandra, I want to create a keyspace and tables dynamically from a Spring Boot application. I am using Java-based configuration.
I have an entity annotated with @Table whose schema I want created before the application starts up, since it has fixed fields that are known beforehand.
However, depending on the logged-in user, I also want to create additional tables for those users dynamically and be able to insert entries into those tables.
Can somebody guide me to some resources I can make use of, or point me in the right direction on how to solve these issues? Thanks a lot for the help!
The easiest thing to do would be to add the Spring Boot Starter Data Cassandra dependency to your Spring Boot application, like so...
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-cassandra</artifactId>
    <version>1.3.5.RELEASE</version>
</dependency>
In addition, this will add the Spring Data Cassandra dependency to your application.
With Spring Data Cassandra, you can configure your application's Keyspace(s) using the CassandraClusterFactoryBean (or more precisely, the subclass... CassandraCqlClusterFactoryBean) by calling the setKeyspaceCreations(:Set) method.
The KeyspaceActionSpecification class is pretty self-explanatory. You can even create one with the KeyspaceActionSpecificationFactoryBean, add it to a Set and then pass that to the setKeyspaceCreations(..) method on the CassandraClusterFactoryBean.
For generating the application's Tables, you essentially just need to annotate your application domain object(s) (entities) using the SD Cassandra @Table annotation, and make sure your domain objects/entities can be found on the application's CLASSPATH.
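For illustration, a minimal entity might look like this (the class and columns are hypothetical; the mapping annotations live in the SD Cassandra 1.x package org.springframework.data.cassandra.mapping):

import org.springframework.data.cassandra.mapping.PrimaryKey;
import org.springframework.data.cassandra.mapping.Table;

// Hypothetical domain entity; a matching table is generated from it
// when schema creation is enabled (see below).
@Table("users")
public class User {

    @PrimaryKey
    private String id;    // becomes the table's partition key

    private String name;  // plain column, mapped by convention

    // constructors, getters and setters omitted for brevity
}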
Specifically, you can have your application @Configuration class extend the SD Cassandra AbstractClusterConfiguration class. There, you will find the getEntityBasePackages():String[] method that you can override to provide the package locations containing your application domain object/entity classes, which SD Cassandra will then use to scan for @Table domain object/entities.
With your application @Table domain object/entities properly identified, you set the SD Cassandra SchemaAction to CREATE using the CassandraSessionFactoryBean method setSchemaAction(:SchemaAction). This will create Tables in your Keyspace for all domain object/entities found during the scan, provided you identified the proper Keyspace on your CassandraSessionFactoryBean.
Obviously, if your application creates/uses multiple Keyspaces, you will need to create a separate CassandraSessionFactoryBean for each Keyspace, with the entityBasePackages configuration property set appropriately for the entities that belong to a particular Keyspace, so that the associated Tables are created in that Keyspace.
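For illustration, a single-Keyspace session bean might look like the following sketch (against the SD Cassandra 1.x API; the keyspace name is a placeholder):

@Bean
public CassandraSessionFactoryBean session(Cluster cluster, CassandraConverter converter) {
    CassandraSessionFactoryBean session = new CassandraSessionFactoryBean();
    session.setCluster(cluster);
    session.setKeyspaceName("my_keyspace");       // placeholder; tables land in this Keyspace
    session.setConverter(converter);
    session.setSchemaAction(SchemaAction.CREATE); // create tables for the scanned @Table entities
    return session;
}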
Now...
For the "additional" Tables per user, that is quite a bit more complicated and tricky.
You might be able to leverage Spring Profiles here; however, profiles are generally only applied on startup. If a different user logs into an already running application, you need a way to supply additional @Configuration classes to the Spring ApplicationContext at runtime.
Your Spring Boot application could inject a reference to an AnnotationConfigApplicationContext, and then use it on a login event to programmatically register additional @Configuration classes based on the user who logged into the application. You need to follow your register(Class...) call(s) with an ApplicationContext.refresh().
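A rough sketch of that idea (UserKeyspaceConfig is a hypothetical per-user @Configuration class; note that refreshing a live context is intrusive, so test this carefully):

@Autowired
private AnnotationConfigApplicationContext applicationContext;

public void onUserLogin(String userId) {
    // Register the additional, user-specific configuration at runtime...
    applicationContext.register(UserKeyspaceConfig.class);
    // ...and refresh so the new bean definitions take effect.
    applicationContext.refresh();
}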
You also need to appropriately handle the situation where the Tables already exist.
This is not currently supported in SD Cassandra, but see DATACASS-219 for further details.
Technically, it would be far simpler to create all the possible Tables needed by the application for all users up front, and use Cassandra's security settings to restrict individual user access by role and assigned permissions.
Another option might be to create temporary Keyspaces and/or Tables as needed when a user logs into the application, and drop them when the user logs out.
Clearly, there are a lot of different choices here, and it boils down more to architectural decisions, tradeoffs and considerations than it does technical feasibility, so be careful.
Hope this helps.
Cheers!
The following Spring configuration class creates the keyspace and tables if they don't exist.
@Configuration
public class CassandraConfig extends AbstractCassandraConfiguration {

    private static final String KEYSPACE = "my_keyspace";
    private static final String USERNAME = "cassandra";
    private static final String PASSWORD = "cassandra";
    private static final String NODES = "127.0.0.1"; // comma-separated nodes

    @Bean
    @Override
    public CassandraCqlClusterFactoryBean cluster() {
        CassandraCqlClusterFactoryBean bean = new CassandraCqlClusterFactoryBean();
        bean.setKeyspaceCreations(getKeyspaceCreations());
        bean.setContactPoints(NODES);
        bean.setUsername(USERNAME);
        bean.setPassword(PASSWORD);
        return bean;
    }

    @Override
    public SchemaAction getSchemaAction() {
        return SchemaAction.CREATE_IF_NOT_EXISTS;
    }

    @Override
    protected String getKeyspaceName() {
        return KEYSPACE;
    }

    @Override
    public String[] getEntityBasePackages() {
        return new String[]{"com.panda"};
    }

    protected List<CreateKeyspaceSpecification> getKeyspaceCreations() {
        List<CreateKeyspaceSpecification> createKeyspaceSpecifications = new ArrayList<>();
        createKeyspaceSpecifications.add(getKeySpaceSpecification());
        return createKeyspaceSpecifications;
    }

    // The method below creates "my_keyspace" if it doesn't exist.
    private CreateKeyspaceSpecification getKeySpaceSpecification() {
        DataCenterReplication dcr = new DataCenterReplication("dc1", 3L);
        return CreateKeyspaceSpecification.createKeyspace(KEYSPACE)
                .ifNotExists()
                .withNetworkReplication(dcr);
    }
}
Building on @Enes Altınkaya's answer:
@Value("${cassandra.keyspace}")
private String keySpace;

@Override
protected List<CreateKeyspaceSpecification> getKeyspaceCreations() {
    return Arrays.asList(
            CreateKeyspaceSpecification.createKeyspace()
                    .name(keySpace)
                    .ifNotExists()
                    .withNetworkReplication(new DataCenterReplication("dc1", 3L)));
}
To define your variables, use an application.properties or application.yml file:

cassandra:
  keyspace: your_keyspace_name
Using config files instead of hardcoded strings means you can publish your code (for example, on GitHub) without publishing your passwords and entry points, which would be a security risk; just keep the config files out of version control with .gitignore.
The following Cassandra configuration will create the keyspace if it does not exist and also run the specified start-up script:
@Configuration
@PropertySource(value = {"classpath:cassandra.properties"})
@EnableCassandraRepositories
public class CassandraConfig extends AbstractCassandraConfiguration {

    @Value("${cassandra.keyspace}")
    private String cassandraKeyspace;

    @Override
    protected List<CreateKeyspaceSpecification> getKeyspaceCreations() {
        return Collections.singletonList(CreateKeyspaceSpecification.createKeyspace(cassandraKeyspace)
                .ifNotExists()
                .with(KeyspaceOption.DURABLE_WRITES, true)
                .withSimpleReplication());
    }

    @Override
    protected List<String> getStartupScripts() {
        return Collections.singletonList("CREATE TABLE IF NOT EXISTS " + cassandraKeyspace
                + ".test(id UUID PRIMARY KEY, greeting text, occurrence timestamp) WITH default_time_to_live = 600;");
    }
}
For table creation, you can set this in the application.properties file:
spring.data.cassandra.schema-action=CREATE_IF_NOT_EXISTS
This answer is inspired by Viswanath's answer.
My cassandra.yml looks as follows:
spring:
  data:
    cassandra:
      cluster-name: Test Cluster
      keyspace-name: keyspace
      port: 9042
      contact-points:
        - 127.0.0.1
@Configuration
@PropertySource(value = { "classpath:cassandra.yml" })
@ConfigurationProperties("spring.data.cassandra")
@EnableCassandraRepositories(basePackages = "info.vishrantgupta.repository")
public class CassandraConfig extends AbstractCassandraConfiguration {

    @Value("${keyspacename}")
    protected String keyspaceName;

    @Override
    protected String getKeyspaceName() {
        return this.keyspaceName;
    }

    @Override
    protected List<CreateKeyspaceSpecification> getKeyspaceCreations() {
        return Collections.singletonList(CreateKeyspaceSpecification
                .createKeyspace(keyspaceName).ifNotExists()
                .with(KeyspaceOption.DURABLE_WRITES, true)
                .withSimpleReplication());
    }

    @Override
    protected List<String> getStartupScripts() {
        return Collections.singletonList("CREATE KEYSPACE IF NOT EXISTS "
                + keyspaceName + " WITH replication = {"
                + " 'class': 'SimpleStrategy', "
                + " 'replication_factor': '3' " + "};");
    }
}
You might have to customize @ConfigurationProperties("spring.data.cassandra"); if your configuration starts with cassandra in the cassandra.yml file, then use @ConfigurationProperties("cassandra").
I have an authentication module which is imported into our projects to provide authentication-related APIs.
ApplicationConfig.java
@Configuration
@ComponentScan({"com.my.package.ldap.security"})
@EnableCaching
@EnableRetry
public class ApplicationConfig {
    ...
}
I've configured Swagger/OpenAPI in my projects and I wish to find a way to manage these imported endpoints:
Specifically, I wish to set an order on the Example object's fields. Right now it is sorted alphabetically by default.
The reason for doing that is that a lot of these fields are "optional", and we have to remove these fields from the example object every time in order to authenticate a user, which is a waste of time.
I've tried annotating the object with @JsonPropertyOrder, but it makes no change:

@JsonPropertyOrder({
    "domain",
    "username",
    "password"
})
Is there any way to achieve that?
I made a small POC. It isn't pretty or very extensible, but it does work as intended. Perhaps one could make it more flexible by re-using the property position on the metadata object, but this example does not include that. This way you can loop over definitions and models, manually doing the work that the framework fails to do at the moment.
Also, be sure not to make this too heavy, because it will be executed every time someone opens the Swagger documentation. It's a piece of middleware that transforms the served Swagger API definition structure; it does not change the original one.
@Order(SWAGGER_PLUGIN_ORDER)
public class PropertyOrderTransformationFilter implements WebMvcSwaggerTransformationFilter {

    @Override
    public Swagger transform(final SwaggerTransformationContext<HttpServletRequest> context) {
        Swagger swagger = context.getSpecification();
        Model model = swagger.getDefinitions().get("applicationUserDetails");
        Map<String, Property> modelProperties = model.getProperties();

        // Keep a reference to the property definitions
        Property domainPropertyRef = modelProperties.get("domain");
        Property usernamePropertyRef = modelProperties.get("username");
        Property passwordPropertyRef = modelProperties.get("password");

        // Remove all entries from the underlying linkedHashMap
        modelProperties.clear();

        // Add your own keys in a specific order
        Map<String, Property> orderedPropertyMap = new LinkedHashMap<>();
        orderedPropertyMap.put("domain", domainPropertyRef);
        orderedPropertyMap.put("username", usernamePropertyRef);
        orderedPropertyMap.put("password", passwordPropertyRef);
        // ...put any remaining properties here in the order you want them

        model.setProperties(orderedPropertyMap);
        return swagger;
    }

    @Override
    public boolean supports(final DocumentationType documentationType) {
        return SWAGGER_2.equals(documentationType);
    }
}
@Configuration
class SwaggerConf {

    @Bean
    public PropertyOrderTransformationFilter propertyOrderTransformationFilter() {
        return new PropertyOrderTransformationFilter();
    }
}
I am trying to figure out how to easily use Spring State Machine together with JPA persistence.
This is the problem I am dealing with:
Incompatible data types - factory and persistence
At a certain point in the program I would like to use the state machine which is connected to a user. There are repositories for that purpose (project spring-statemachine-data-jpa).
First, the repository is used to check whether a state machine already exists for the user. If not, a new state machine is created and persisted.
The problem is that I have different types of state machines: the factory creates a StateMachine<UserState, UserEvent>, while the repository returns a JpaRepositoryStateMachine. These are not compatible with each other, and it is not clear to me how to persist / create / restore the state machines.
Can you please clarify that for me?
@Autowired
private StateMachineRepository<JpaRepositoryStateMachine> repository;

public void someMethod(User user) {
    Optional<JpaRepositoryStateMachine> stateMachine = repository.findById(user.getId()); // JPA state machine
    if (stateMachine.isEmpty()) {
        StateMachine<UserState, UserEvent> createdStateMachine = factory.getStateMachine(user.getId()); // Spring state machine
        repository.save(createdStateMachine); // compile error
    }
    // here: ready-to-use state machine - how?
}
Thanks for your help!
Try to use a StateMachineService to get a state machine instance instead of explicitly retrieving it from the repository or factory. You can use the default implementation provided by Spring:
@Bean
public StateMachineService<State, Event> stateMachineService(
        final StateMachineFactory<State, Event> stateMachineFactory,
        final StateMachinePersist<State, Event, String> stateMachinePersist) {
    return new DefaultStateMachineService<>(stateMachineFactory, stateMachinePersist);
}
So, your code will look like:
@Autowired
private StateMachineService<State, Event> stateMachineService;

public void someMethod(User user) {
    StateMachine<State, Event> stateMachine = stateMachineService.acquireStateMachine(user.getId(), false);
    // here: ready-to-use state machine - call stateMachine.start(), for example
}
If you go inside the acquireStateMachine method, you can see that it queries the state machine repository by id and creates a new one if nothing is found.
You can use a JpaPersistingStateMachineInterceptor to implicitly save and update the state machine instance on every change.
@Bean
public JpaPersistingStateMachineInterceptor<State, Event, String> jpaPersistingStateMachineInterceptor() {
    return new JpaPersistingStateMachineInterceptor<>(stateMachineRepository);
}
See Persisting State Machine
I am using Spring Boot for my development. For now I have used EhCache for caching; it is directly supported by Spring Boot. This is an "in-process" cache, i.e., it becomes part of your process. That is okay for now, but my server will run on multiple nodes in the near future, hence I want to switch to Memcached as a common caching layer.
After spending a good amount of time, I could not find a good sample of using Memcached from Spring Boot. I have looked at Simple Spring Memcached, which comes close to my requirement, but its examples use Spring XML configuration. Spring Boot avoids such XML configuration as far as possible; at least, I could not quickly map the example to the Spring Boot world.
I want to use Memcached (directly or via a cache-abstraction layer) from Spring Boot. If anybody can point me to a relevant Spring Boot example, it will save me a lot of time.
You could also check the Memcached Spring Boot library. It provides a Memcached implementation for the Spring Cache Abstraction.
In other words, you use the same configuration and the same annotations as you would with any other Spring Cache implementation. You can check out the usage of the library here.
There are also example projects in Kotlin and Java.
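For example, with the library on the classpath and caching enabled, the calling code stays plain Spring (a sketch; the cache name and key are illustrative, and the Memcached server address is supplied through the library's memcached.cache.* properties):

@Service
public class GreetingService {

    // Standard Spring Cache annotation; the library backs it with Memcached transparently
    @Cacheable(cacheNames = "greetings", key = "#name")
    public String greet(String name) {
        return "Hello, " + name;
    }
}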
I have already accepted the answer given by @ragnor, but I think I should post a complete example here which has worked for me.
Make sure you have caching enabled for your application by adding @EnableCaching.
POM.xml should have the following dependencies:
<dependency>
    <groupId>com.google.code.simple-spring-memcached</groupId>
    <artifactId>spring-cache</artifactId>
    <version>3.6.1</version>
</dependency>
<dependency>
    <groupId>com.google.code.simple-spring-memcached</groupId>
    <artifactId>spymemcached-provider</artifactId>
    <version>3.6.1</version>
</dependency>
Add a configuration class for your Memcached setup, say SSMConfig.java:
@Configuration
@EnableAspectJAutoProxy
@ImportResource("simplesm-context.xml") // This line may or may not be needed, not sure
public class SSMConfig
{
    private String _memcachedHost; // Machine where memcached is running (set via @Value or a setter)
    private int _memcachedPort;    // Port on which memcached is running

    @Bean
    public CacheManager cacheManager()
    {
        // Extended manager used as it will give a custom-expiry facility in future if needed
        ExtendedSSMCacheManager ssmCacheManager = new ExtendedSSMCacheManager();

        // We can create more than one cache, hence the list
        List<SSMCache> cacheList = new ArrayList<SSMCache>();

        // First cache: testcache
        SSMCache testCache = createNewCache(_memcachedHost, _memcachedPort, "testcache", 5);

        // One more dummy cache
        SSMCache dummyCache = createNewCache(_memcachedHost, _memcachedPort, "dummycache", 300);

        cacheList.add(testCache);
        cacheList.add(dummyCache);

        // Adding the cache list to the cache manager
        ssmCacheManager.setCaches(cacheList);
        return ssmCacheManager;
    }

    // expiryTimeInSeconds: time (in seconds) after which a given element will expire
    private SSMCache createNewCache(String memcachedServer, int port,
            String cacheName, int expiryTimeInSeconds)
    {
        // Basic client factory to be used. This is SpyMemcached for now.
        MemcacheClientFactoryImpl cacheClientFactory = new MemcacheClientFactoryImpl();

        // Memcached server address parameters, e.g. "127.0.0.1:11211"
        String serverAddressStr = memcachedServer + ":" + String.valueOf(port);
        AddressProvider addressProvider = new DefaultAddressProvider(serverAddressStr);

        // Basic configuration object
        CacheConfiguration cacheConfigToUse = getNewCacheConfiguration();

        // Create the cache factory
        CacheFactory cacheFactory = new CacheFactory();
        cacheFactory.setCacheName(cacheName);
        cacheFactory.setCacheClientFactory(cacheClientFactory);
        cacheFactory.setAddressProvider(addressProvider);
        cacheFactory.setConfiguration(cacheConfigToUse);

        // Get the Cache object
        Cache object = null;
        try {
            object = cacheFactory.getObject();
        } catch (Exception e) {
            // TODO: handle or at least log the failure instead of swallowing it
        }

        // allow/disallow removing all entries from this cache!!
        boolean allowClearFlag = false;

        SSMCache ssmCache = new SSMCache(object, expiryTimeInSeconds, allowClearFlag);
        return ssmCache;
    }

    private CacheConfiguration getNewCacheConfiguration()
    {
        CacheConfiguration ssmCacheConfiguration = new CacheConfiguration();
        ssmCacheConfiguration.setConsistentHashing(true);
        // ssmCacheConfiguration.setUseBinaryProtocol(true);
        return ssmCacheConfiguration;
    }
}
OK, we are ready to use our configured cache.
Sample methods in some other class to read from the cache and to remove from the cache:
#Cacheable(value="dummycache, key="#givenId.concat('-dmy')", unless="#result == null")
public String getDummyDataFromMemCached(String givenId)
{
logger.warn("getDummyDataFromMemCached: Inside DUMMY method to actually get data");
return "Sample-" + String.valueOf(givenId);
}
#CacheEvict(value="dummycache",key="#givenId.concat('-dmy')")
public void removeDummyDataFromMemCached(String givenId)
{
//Do nothing
return;
}
Note that we have added a suffix to the cache keys. Because Memcached does not support cache zones, "dummycache" and "testcache" do not ultimately remain separate on a single server (they may remain separate with some other cache implementation). Hence, to avoid conflicts, we append a unique suffix to the cache key.
If you want to cache objects of your own class, make sure they are serializable: just change your class definition to 'XYZ implements Serializable'.
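For example (a trivial sketch, with hypothetical fields):

import java.io.Serializable;

// Values stored in Memcached are sent over the wire, hence the marker interface.
public class UserProfile implements Serializable {

    private static final long serialVersionUID = 1L;

    private String id;
    private String displayName;

    // getters and setters omitted for brevity
}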
You can find some materials on how to configure SSM using Java configuration instead of XML files here and here.
Basically, you have to move the definitions of all beans from XML to Java.
It's a RESTful web app. I am using Hibernate Envers to store historical data. Along with the revision number and timestamp, I also need to store other details (for example: IP address and authenticated user). Envers provides multiple ways to have a custom revision entity, which is awesome. I am facing a problem in setting the custom data on the revision entity.
@RevisionEntity(MyCustomRevisionListener.class)
public class MyCustomRevisionEntity extends DefaultRevisionEntity {
    private String userName;
    private String ip;
    // Accessors
}

public class MyCustomRevisionListener implements RevisionListener {
    public void newRevision(Object revisionEntity) {
        MyCustomRevisionEntity customRevisionEntity = (MyCustomRevisionEntity) revisionEntity;
        // Here I need userName and IP address passed as arguments somehow, so that I can set them on the revision entity.
    }
}
Since the newRevision() method does not allow any additional arguments, I cannot pass my custom data (like username and IP) to it. How can I do that?
Envers also provides another approach as:
An alternative method to using the org.hibernate.envers.RevisionListener is to instead call the getCurrentRevision( Class revisionEntityClass, boolean persist ) method of the org.hibernate.envers.AuditReader interface to obtain the current revision, and fill it with desired information.
So using the above approach, I'll have to do something like this:
Change my current DAO method from:
public void persist(SomeEntity entity) {
    ...
    entityManager.persist(entity);
    ...
}
to
public void persist(SomeEntity entity, String userName, String ip) {
    ...
    // Do the intended work
    entityManager.persist(entity);

    // Do the additional work
    AuditReader reader = AuditReaderFactory.get(entityManager);
    MyCustomRevisionEntity revision = reader.getCurrentRevision(MyCustomRevisionEntity.class, false);
    revision.setUserName(userName);
    revision.setIp(ip);
}
I don't feel very comfortable with this approach, as keeping audit data seems like a cross-cutting concern to me. And I obtain the userName, IP, and other data through the HTTP request object, so all that data would need to flow down from the entry point of the application (controller) to the lowest layer (DAO layer).
Is there any other way in which I can achieve this? I am using Spring.
I am imagining something like Spring keeping information about the 'stack' to which a particular method invocation belongs, so that when newRevision() is invoked, I know which invocation at the entry point led to it, and I can somehow obtain the arguments passed to the first method of the call stack.
One good way to do this would be to leverage a ThreadLocal variable.
As an example, Spring Security has a filter that initializes a thread-local variable stored in SecurityContextHolder, and then you can access this data from that specific thread simply by doing something like:

SecurityContext ctx = SecurityContextHolder.getContext();
Authentication authentication = ctx.getAuthentication();
So imagine an additional interceptor that your web framework calls, which either adds the extra information to the Spring Security context (perhaps in a custom user details object, if you use Spring Security), or creates your own holder and context object to hold the information the listener needs.
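For illustration, a hand-rolled holder and context might look like this (UserContext and UserContextHolder are hypothetical names, not framework classes):

// Hypothetical request-scoped data, populated by a filter/interceptor at request start
public class UserContext {
    private final String userName;
    private final String ipAddress;

    public UserContext(String userName, String ipAddress) {
        this.userName = userName;
        this.ipAddress = ipAddress;
    }

    public String getUserName() { return userName; }
    public String getIpAddress() { return ipAddress; }
}

public final class UserContextHolder {
    private static final ThreadLocal<UserContext> CONTEXT = new ThreadLocal<>();

    public static void setUserContext(UserContext context) { CONTEXT.set(context); }
    public static UserContext getUserContext() { return CONTEXT.get(); }
    public static void clear() { CONTEXT.remove(); } // call in a finally block when the request ends
}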
Then it becomes a simple:
public class MyRevisionEntityListener implements RevisionListener {
    @Override
    public void newRevision(Object revisionEntity) {
        // If you use Spring Security, you could use SecurityContextHolder instead.
        final UserContext userContext = UserContextHolder.getUserContext();
        MyRevisionEntity mre = MyRevisionEntity.class.cast(revisionEntity);
        mre.setIpAddress(userContext.getIpAddress());
        mre.setUserName(userContext.getUserName());
    }
}
This feels like the cleanest approach to me.
It is worth noting that the other API, getCurrentRevision(Class, boolean), was deprecated as of Hibernate 5.2 and is scheduled for removal in 6.0. While an alternative means may be introduced, the intended way to perform this type of logic is with a RevisionListener.
I am attempting to use Unitils to assist me in database testing. I would like to use the Unitils/DbMaintain functionality for disabling constraints, but there is a catch: I do not wish to use DbMaintain to create my databases, yet I do wish to use its constraint-disabling functionality. I was able to achieve this through the use of a custom module, listed below:
public class DisableConstraintModule implements Module {
    private boolean disableConstraints = false;

    public void afterInit() {
        if (disableConstraints) {
            DatabaseUnitils.disableConstraints();
        }
    }

    public void init(Properties configuration) {
        disableConstraints = PropertyUtils.getBoolean("Database.disableConstraints", false, configuration);
    }
}
This partially solves what I want; however, I wish to disable constraints only for the tables I use in my tests. My tests run against a database with multiple schemas, and each schema has hundreds of tables. DatabaseUnitils.disableConstraints() disables the constraints for every table in every schema, which is far too time-consuming and unnecessary.
Upon searching the DbMaintain code, I found that the Db2Database class does indeed contain a function for disabling constraints on a specific schema and table-name basis; however, this method is protected. I could access it by either extending the Db2Database class or using reflection.
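For example, the reflection route might look roughly like this (the method name and signature are placeholders; look up the actual protected method on Db2Database in your DbMaintain version before relying on it):

import java.lang.reflect.Method;

// "disableConstraint" is a placeholder name for the protected DbMaintain method.
private void disableConstraint(Db2Database database, String schema, String table) throws Exception {
    Method method = Db2Database.class.getDeclaredMethod("disableConstraint", String.class, String.class);
    method.setAccessible(true);
    method.invoke(database, schema, table);
}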
Next, I need to determine which schemas and tables I am interested in. I could do this by observing the @DataSet annotation to determine which schemas and tables matter, based on what is in the XML. To do this, I need to override the TestListener so I can disable the constraints from the XML before it attempts to insert the dataset. This was my attempt:
public class DisableConstraintModule extends DbUnitModule {
    private boolean disableConstraints = false;
    private TableBasedConstraintsDisabler disabler;

    public void afterInit() {
    }

    public void init(Properties configuration) {
        disableConstraints = PropertyUtils.getBoolean("Database.disableConstraints", false, configuration);
        PropertyUtils.getInstance("org.unitils.dbmaintainer.structure.ConstraintsDisabler.implClassName", configuration);
    }

    public void disableConstraintsForDataSet(MultiSchemaDataSet dataSet) {
        disabler.disableConstraints(dataSet);
    }

    protected class DbUnitCustomListener extends DbUnitModule.DbUnitListener {
        @Override
        public void beforeTestSetUp(Object testObject, Method testMethod) {
            disableConstraintsForDataSet(getDataSet(testMethod, testObject));
            insertDataSet(testMethod, testObject);
        }
    }
}
This is what I would like to do; however, I am unable to get the @DataSet annotation to trigger my DbUnitCustomListener, and instead it calls the default DbUnitModule DbUnitListener. Is there any way for me to override which listener gets called when using the @DataSet annotation, or is there a better approach altogether for disabling constraints on a specific schema and table level for a DB2 database?
Thanks
You have to tell Unitils to use your subclass of DbUnitModule. You do this using the unitils.module.dbunit.className property in your unitils.properties file. It sounds like you've got this part figured out.
The second part is to override DbUnitModule's getTestListener() in order to return your custom listener.
See this post for an example.
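Putting it together, the override might look like this sketch (adjust to your Unitils version, and remember that the unitils.module.dbunit.className entry in unitils.properties must point at this class):

import org.unitils.core.TestListener;

public class DisableConstraintModule extends DbUnitModule {

    // Returning the custom listener makes Unitils route @DataSet handling
    // through DbUnitCustomListener instead of the default DbUnitListener.
    @Override
    public TestListener getTestListener() {
        return new DbUnitCustomListener();
    }
}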