I need to use the environment in my Sling Model (dev|prod etc., not a run mode such as author|publish). How can I inject it into my core model?
Is there any service for this?
"dev|prod etc., not a run mode such as author|publish"
author and publish are among the fixed run modes, but run modes in general can also be used to tell dev from prod (or similar kinds of environments).
Usually, when AEM environments need to be told apart, e.g. dev vs. prod, this is realised through custom run modes. While AEM as a Cloud Service places some limits on how much you can customise, the case you mention is still covered out of the box. Among other things, run modes can be used to manage environment-specific OSGi configuration.
An on-premise/hosted deployment gives you even more flexibility. I've always used customized run modes for this kind of purpose.
One thing to note: it does raise an eyebrow that you need to check the run mode programmatically in a Sling Model. I'm not sure what you're implementing, but if a piece of functionality depends on the environment, I'd rather handle it via alternative OSGi configurations assigned to the relevant run modes. It's generally easier to add another configuration (e.g. as a sling:OsgiConfig node) when required than to adjust conditional logic in a Java class that only recognises a predetermined set of environments.
Provided that you have a set of run modes like this, you can inject SlingSettingsService into your model and read the run modes that way. Alternatively, write an OSGi service that encapsulates whatever logic you need: such a service starts up with the configuration relevant to the given environment, and you can inject it directly into your Sling Model, knowing the values it returns are the right ones.
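For illustration, here's a minimal sketch of the SlingSettingsService approach; the model class, the getter and the "prod" run mode name are assumptions, not part of any AEM API:

```java
import java.util.Set;

import org.apache.sling.api.resource.Resource;
import org.apache.sling.models.annotations.Model;
import org.apache.sling.models.annotations.injectorspecific.OSGiService;
import org.apache.sling.settings.SlingSettingsService;

// Hypothetical model: adapts from a Resource and reads the active run modes.
@Model(adaptables = Resource.class)
public class EnvironmentAwareModel {

    // Injects the OSGi service that exposes the run modes the instance was started with.
    @OSGiService
    private SlingSettingsService slingSettings;

    // True when a custom "prod" run mode is active (assumes your deployment defines one).
    public boolean isProduction() {
        Set<String> runModes = slingSettings.getRunModes();
        return runModes.contains("prod");
    }
}
```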
I have been evaluating Spring Boot 3's support for native compilation with native-image, and so far I am very impressed. It has the potential to drastically reduce our cloud spend.
I note in the documentation that one of the major changes is the "closed world" assumption, meaning things like @ConditionalOn... are evaluated at build time rather than at runtime.
To give a common example, I have an EmailSender interface with three implementations: LoggingEmailSender, SmtpEmailSender and SendGridEmailSender. Currently, when running on the JVM, the correct bean for the environment is created: LoggingEmailSender locally, SmtpEmailSender in test and SendGridEmailSender in production.
I am unsure what the best approach is to migrate this kind of conditional logic over to a Spring native way of doing things.
Currently the only option I can see is to compile one binary per environment and use scopes at build and run time to enable the different implementations. This would result in several different Docker images being created and deployed to their respective environments, which breaks the previous convention of promoting the same containers through the QA, Staging and Production environments.
Is there a recommended strategy that I have overlooked, or is this the best option available at the current level of maturity of native support?
As far as I know, you can override properties for Spring Boot native applications too, based on the run environment. So perhaps you only need to rethink your bean creation a bit: drop the conditional creation of the beans and have a single factory method for EmailSender. In that method you can create the different implementations of the interface (it is an interface, right?) based on a property value you read.
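A minimal sketch of that idea, using the EmailSender types from the question; the "email.sender" property name and its default are assumptions:

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Sketch only: EmailSender and its three implementations are the
// questioner's types; the "email.sender" property name is made up.
@Configuration
public class EmailSenderConfig {

    @Bean
    public EmailSender emailSender(@Value("${email.sender:logging}") String type) {
        // Plain code runs at startup, not at image build time, so a single
        // native image can still choose the implementation per environment.
        switch (type) {
            case "smtp":     return new SmtpEmailSender();
            case "sendgrid": return new SendGridEmailSender();
            default:         return new LoggingEmailSender();
        }
    }
}
```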
I'm adding unit tests to an existing codebase. The application retrieves data from a server through REST, and the URL of that server is hard-coded in the application.
However, developers are obviously not testing new features, bug fixes, etc. against the live environment, but rather against a development server. To accomplish this, the development build has a different server-URL string than the production build.
During development a non-production URL should be enforced; when creating a production build, a production URL should be enforced instead.
I'm looking for advice on how to implement a neat solution for this, since forgetting to change the URL can currently have devastating outcomes.
A Maven build script only tests the production value, not both, and I haven't found any way to make build-specific unit tests. (Technologies used: Java, Git, git-flow, Maven, JUnit.)
Application configuration is an interesting topic. What you've pointed out is definitely a practical need, but even more so: if you have to repackage (and possibly rebuild) for different environments, how do you truly know that what you've deployed is the same thing that was actually tested and verified?
So load the configuration from a resource outside of the application package. A Java option pointing to a file on the filesystem, or a JNDI resource, are both good choices. You can also provide defaults for development by committing a config file and reading from it when the Java option is not specified.
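A minimal sketch of that fallback, assuming a config.file system property and a committed default-config.properties (both names are made up; error handling is trimmed):

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public final class AppConfig {

    public static Properties load() throws IOException {
        Properties props = new Properties();
        String external = System.getProperty("config.file");
        if (external != null) {
            // e.g. -Dconfig.file=/etc/myapp/config.properties on each environment
            try (InputStream in = new FileInputStream(external)) {
                props.load(in);
            }
        } else {
            // Development default committed with the code, read from the classpath
            try (InputStream in = AppConfig.class
                    .getResourceAsStream("/default-config.properties")) {
                props.load(in);
            }
        }
        return props;
    }
}
```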
I've searched on the internet and here on SO, but couldn't wrap my mind around the various options.
What I need is a way to introduce customer-specific customization at any point of my app, in an "external" way; external as in "add a drop-in jar and get the customized behavior".
I know that I should implement some sort of plugin system, but I really don't know where to start.
I've read some comments about Spring, OSGi, etc., but can't figure out which approach is best.
Currently, I have a really simple structure like this:
com.mycompany.module.client.jar // client side applet
com.mycompany.module.server.jar // server side services
I need a way of doing something like:
1) extend com.mycompany.module.client.MyClass as com.mycompany.module.client.MyCustomerClass
2) jar it separately from the "standard" jars: com.mycompany.customer.client.jar
3) drop in the jar
4) start the application, and have MyCustomerClass used everywhere the original MyClass was used.
Also, since the existing application is pretty big and based on a custom third-party framework, I can't introduce disruptive changes.
Which is the best way of doing this?
Also, I need the solution to work with Java 1.5, since the above-mentioned third-party framework requires it.
Spring 3.1 is probably the easiest way to implement this, as its dependency injection framework provides exactly what you need. With Spring 3.1's introduction of bean profiles, separating concerns becomes even easier.
But integrating Spring into an existing project can be challenging, as some core architecture has to be put in place. If you're looking for a quick and non-invasive solution, using Spring containers programmatically may be an ideal approach.
Once you've initialized your Spring container in your startup code, you can explicitly access beans of a given interface by simply querying the container. Placing a single jar file with the necessary configuration classes on the classpath essentially includes them automatically.
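A minimal sketch of that programmatic usage; the profile name and base package are illustrative, with MyClass standing in for the class from the question:

```java
import org.springframework.context.annotation.AnnotationConfigApplicationContext;

public class Startup {
    public static void main(String[] args) {
        AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext();
        ctx.getEnvironment().setActiveProfiles("customerA"); // enable the drop-in profile
        ctx.scan("com.mycompany"); // finds beans on the classpath, including drop-in jars
        ctx.refresh();

        // Resolves to the customer's override if its jar (and profile) is active,
        // otherwise to the standard bean of this type.
        MyClass service = ctx.getBean(MyClass.class);
    }
}
```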
Personalization depends strongly on the application design. You can search for pluggable Java applications on the internet and find good articles (for example: http://solitarygeek.com/java/a-simple-pluggable-java-application). In a pluggable application, features can be added or removed as the user decides. The key is to use interfaces to decouple the API layer from its implementations, as in the sketch below.
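One common way to wire that up is java.util.ServiceLoader (Java 6+; on Java 1.5 you would read the META-INF/services file yourself). A minimal sketch with placeholder names:

```java
import java.util.ServiceLoader;

public class PluginDemo {

    // Placeholder contract that both the standard and the customer implementation fulfil.
    public interface Greeter {
        String greet();
    }

    public static void main(String[] args) {
        // A drop-in jar contributes a META-INF/services file naming its
        // implementation class; the first provider found overrides the default.
        for (Greeter greeter : ServiceLoader.load(Greeter.class)) {
            System.out.println(greeter.greet());
            return;
        }
        System.out.println("No plugin found; using default behaviour.");
    }
}
```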
User personalisation is something that needs to be in the design. If the main body of code cannot be changed, what you can change as an afterthought is likely to be very limited.
You need to start by identifying what can be changed on a per-user basis. As it appears the rest cannot be changed, this is your main limiting factor. From that list, determine what would be useful to change, and implement it.
One option is to package the environment-specific properties at build time (for example, using Maven profiles).
Another option is to set -Denv=production on your production environment and, on startup, load /${env}/config.properties. (Spring allows that, for example, but it can also be done manually; see the sketch below.)
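A minimal sketch of that second option done manually; the default of "dev" and the class name are assumptions, and null/error handling is omitted:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public final class EnvConfig {

    public static Properties load() throws IOException {
        // e.g. -Denv=production resolves to /production/config.properties on the classpath
        String env = System.getProperty("env", "dev");
        Properties props = new Properties();
        try (InputStream in = EnvConfig.class
                .getResourceAsStream("/" + env + "/config.properties")) {
            props.load(in);
        }
        return props;
    }
}
```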
I've used both. The former means no additional environment configuration; the latter allows the same build to be used on multiple environments.
The question: are there any other significant pros/cons, or does it make virtually no difference which approach is chosen?
Related: Load environment-specific properties for use with PropertyPlaceholderConfigurer?
In my opinion, having different outputs per environment is a major downside, as it means you need to build N copies of the app, run the same build commands N times, and so on. It's too easy to run into mistakes where the "dev" version ends up on the QA site, etc.
There's a third option in between, which I'm a fan of: storing the configuration values on the servers themselves, separate from the application. The application is then either written to know where to find these configuration files, or you have some script that "re-configures" the app by replacing tokens in its configuration files with the canonical values from the external files.
This way you can ship the same binary to all environments, and the external configurations can easily be placed under source control (for example, one file per environment) so that changes can be audited, propagated automatically, and so on.
This is also a convenient option if you work in a large organization where the developers are separate from the group that operates the application or is responsible for the different environments: with this method, developers can define what needs to be configured, while the other group is responsible for the configuration values supplied on each host.
I have two builds: one that generates the binary (a war file, without any server-specific configuration) and another project which generates the property files for each environment.
The deployment process takes the war and related configuration files and does its magic.
I don't think that shipping the configuration for all environments in the binary is good practice, mostly because there's a chance of starting the app with the wrong option, and suddenly the dev application tries to connect to production.
Another thing is that some properties, such as DB connection details or the payment gateway password, are kept in a separate configuration file owned by the operations / managed services team, as we don't want developers or rogue DBAs to go ballistic with the production DB.
Our current app runs in a single JVM.
We are now splitting up the app into separate logical services where each service runs in its own JVM.
The split is being done to allow a single service to be modified and deployed without impacting the entire system. This reduces the need to QA the entire system; we only need to QA the interaction with the service being changed.
For inter service communication we use a combination of REST, an MQ system bus, and database views.
What I don't like about this:
REST means we have to marshal data to/from XML
DB views couple the systems together, which defeats the whole point of separate services
The MQ / system bus adds complexity
There is inevitably some code duplication between services
We have to set up n JBoss server configurations, do n deployments, maintain n setup scripts, and so on.
Is there a better way to structure an internal application to allow modular development and deployment while allowing the app to run in a single JVM (and achieving the associated benefits)?
I'm a little confused as to what you're really asking here. If you split your application into different services communicating across the network, then data marshalling has to occur somewhere.
Having said that, have you investigated OSGi? You can deploy different bundles (basically, jar files with additional metadata defining the interfaces they expose) into the same OSGi container, and the container facilitates communication between these bundles transparently: since everything runs within the same JVM, you call methods on objects in other bundles as you normally would.
An OSGi container permits unloading and upgrading bundles at runtime, and applications should keep running normally (if in a degraded fashion), provided the OSGi bundle lifecycle states are respected.
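For a flavour of what that looks like, here's a minimal bundle activator; GreetingService and its implementation are placeholder names for a service interface living in an exported API package (the generic registerService signature assumes OSGi R4.3+):

```java
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

public class Activator implements BundleActivator {

    private ServiceRegistration<GreetingService> registration;

    public void start(BundleContext context) {
        // Publish the implementation under its interface; other bundles in the
        // same JVM look it up and invoke it as a plain method call, no marshalling.
        registration = context.registerService(
                GreetingService.class, new GreetingServiceImpl(), null);
    }

    public void stop(BundleContext context) {
        registration.unregister(); // lets the bundle be swapped at runtime
    }
}
```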
It sounds like your team has a manual QA process and the real issue is automating regression tests so that you can deploy new releases quickly and with confidence. Breaking up the code into separate servers is a workaround for that.
If you're willing to restart the server, one approach is to compile the code into separate jar files and deploy a module by dropping in a new jar and restarting. This is largely a matter of structuring your code base so that bad dependencies don't creep in and calls between jars are made via interfaces that don't change. (Or alternatively, use abstract classes so you can add a new method with a default implementation.) Your build system can help by ensuring that separately deployed modules only depend on common interfaces, making anything else a compile error. Note, though, that the compiler won't detect incompatibilities when you swap in jars you didn't compile against, so I'm not sure this really avoids the need for a good QA process.
If you want to deploy new code without restarting the JVM, then OSGi is the standard way to do that (but one that I know little about).