After reading the following post I have a few questions:
https://spring.io/blog/2014/08/05/extending-spring-cloud
Imagine that I have implemented my own Spring Cloud (Cloud Platform Extensibility), and after testing on my local machine I want to deploy it to different environments.
Assume that:
My environments have a Docker installation.
I do not want to install the Cloud Foundry architecture in them.
My questions are:
What are the requirements for the different environments to work with my own Spring Cloud? I.e., must I install the Cloud Foundry architecture on all the environment machines?
Is the Cloud Foundry architecture compulsory even though I have implemented my custom Spring Cloud?
Must I use commands like "cf" to upload and deploy the services?
Many thanks.
Regards,
Paco.
That is an old blog post, and I feel it doesn't accurately describe Spring Cloud as it stands today. It refers to the since-renamed Spring Cloud Connectors project.
Spring Cloud, built on top of Spring Boot, gives developers an easy way of building "cloud native" and "12-factor" applications. That essentially boils down to the common patterns found in modern applications, such as centralized configuration, service discovery, and circuit breakers. This is cloud-agnostic and works well in a variety of environments, including AWS and GCP.
So no, Spring Cloud isn't directly related to Cloud Foundry; however, it works nicely there, as it does in many other places.
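To make the "cloud agnostic" point concrete, here is a minimal sketch of one of those patterns, service discovery. The class name is mine, and the discovery backend (Eureka, Consul, etc.) is chosen purely by which starter dependency is on the classpath, not by the code:

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

    @SpringBootApplication
    @EnableDiscoveryClient // backend picked by the starter on the classpath
    public class CatalogApplication {
        public static void main(String[] args) {
            SpringApplication.run(CatalogApplication.class, args);
        }
    }

The same code registers the service whether it runs on AWS, GCP, or Cloud Foundry, which is the point the answer above is making.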
You have probably solved your problem, but in case not, and for the sake of others, I'll post an answer. You can deploy a Spring Cloud application on Docker Swarm using Docker Compose v3. As shown in this repository, the command docker stack deploy -f all-in-one.yml springcloud deploys the resources specified in all-in-one.yml on Docker Swarm. You can take a look at how docker stack works in this documentation.
I'm trying to bundle ReactJS and Spring Boot API together and build one fat jar. In every tutorial I read, I'm told to put the localhost API URL as a proxy in package.json of the React app like below.
"proxy": "http://localhost:8080"
As I obviously don't have PROD deployment experience with this, is this the way to go when you are deploying in PROD? If not, please guide me in the right direction. I couldn't find the answer anywhere.
Also, are there any cons to doing so in a medium-sized project with two developers? I appreciate any input.
The "proxy" field should only be used in development environment when the Webpack dev server is first in line(to enable the Hot-Reload feature)
Here is a guide from 2018:
spring + react guide
Regardless, there are two main ways of hosting the React app:
Inside the Spring Boot jar as a static resource (you can use the frontend-maven-plugin to run yarn/npm; again, see the guide).
The advantage of this method is security: you don't need CORS enabled to serve the page.
The disadvantage is convenience: this solution requires more code, and the Spring Boot server handles serving the UI to the client, which requires extra calls to the server (Spring-first approach).
The other option is to host it in a hosting service like Amazon S3; it is then hosted not in Spring but in S3 and is first in line (UI-first approach). You will need to enable CORS in the Spring Boot app (see the sketch below), but this is a more convenient solution.
P.S. I would read some guides first; it will help you with general understanding.
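If you go the UI-first route, a minimal sketch of the CORS configuration mentioned above could look like this; the path prefix and the S3 origin are placeholders for your own values:

    import org.springframework.context.annotation.Configuration;
    import org.springframework.web.servlet.config.annotation.CorsRegistry;
    import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

    @Configuration
    public class CorsConfig implements WebMvcConfigurer {
        @Override
        public void addCorsMappings(CorsRegistry registry) {
            // hypothetical API prefix and S3 origin; replace with your own
            registry.addMapping("/api/**")
                    .allowedOrigins("https://my-bucket.s3.amazonaws.com")
                    .allowedMethods("GET", "POST", "PUT", "DELETE");
        }
    }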
I have multiple Spring Boot projects that are running on Windows Server. For now I am deploying each project as a Windows service, because we can't use Docker.
Now I am thinking about reducing the number of Windows services I have, without changing the microservice architecture.
One alternative is OSGi, which seems very nice, but from what I have seen it is not recommended to run Spring Boot apps with OSGi.
The next alternative I was thinking about was to create my own Java "controller app" that can start/stop the other microservices. Then only the "controller app" would have to be deployed as a Windows service.
Is there a better alternative to creating my own "controller app"? Docker would be nice, but unfortunately we can't use it. Or should we maybe try to run our Spring Boot services with OSGi?
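For reference, a minimal sketch of what such a "controller app" could look like, assuming each microservice ships as a runnable jar; the paths and service names are hypothetical:

    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;

    // Launches each microservice jar as a child process and can stop it later.
    public class ServiceController {

        private final Map<String, Process> running = new HashMap<>();

        public void start(String name, String jarPath) throws IOException {
            Process p = new ProcessBuilder("java", "-jar", jarPath)
                    .inheritIO() // forward child stdout/stderr to this console
                    .start();
            running.put(name, p);
        }

        public void stop(String name) {
            Process p = running.remove(name);
            if (p != null) {
                p.destroy(); // graceful shutdown; destroyForcibly() as a last resort
            }
        }

        public static void main(String[] args) throws IOException {
            ServiceController controller = new ServiceController();
            controller.start("orders", "C:/services/orders-service.jar");
            controller.start("billing", "C:/services/billing-service.jar");
        }
    }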
I have a Java app which I am planning to migrate to Pivotal Cloud Foundry. The application uses JMX to change some of its properties at runtime. Is it possible to retain the same architecture when I migrate the app to PCF, or should I explore a different approach?
Are you using Spring Boot in your Java app? If so, you can use JMX features with Actuator. Jolokia helps you do this via JMX over HTTP.
Please refer: Spring Boot JMX Management
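As a rough sketch, a Spring bean whose property can be changed at runtime over JMX might look like this; the bean and property names are mine, and with the Jolokia dependency added the same MBean becomes reachable over HTTP:

    import org.springframework.jmx.export.annotation.ManagedAttribute;
    import org.springframework.jmx.export.annotation.ManagedResource;
    import org.springframework.stereotype.Component;

    @Component
    @ManagedResource(objectName = "com.example:type=RuntimeSettings") // hypothetical name
    public class RuntimeSettings {

        private volatile int pollingIntervalSeconds = 30;

        @ManagedAttribute // readable via JMX
        public int getPollingIntervalSeconds() {
            return pollingIntervalSeconds;
        }

        @ManagedAttribute // writable via JMX at runtime
        public void setPollingIntervalSeconds(int pollingIntervalSeconds) {
            this.pollingIntervalSeconds = pollingIntervalSeconds;
        }
    }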
If this is a traditional Java app that you have pushed to PCF, you can use Java buildpack features to enable JMX.
Please refer: Enable JMX port via Java Build Pack
Please try and let us know how it goes.
For a PCF app, the cloud environment should provide the dependencies needed by your app. You can inject these dependencies at runtime in various ways, for instance through environment settings.
If you need, say, credentials at runtime, you can look at Spring Cloud Services and its Config Server. If you are looking for other services, you can use Service Registry and discovery (based on the Netflix Eureka component) within Spring Cloud Services.
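As an illustration, a bean that picks up new values from the Config Server at runtime might look like the sketch below; the property name is hypothetical, and after the property changes a POST to the refresh actuator endpoint rebinds the bean:

    import org.springframework.beans.factory.annotation.Value;
    import org.springframework.cloud.context.config.annotation.RefreshScope;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    @RefreshScope // re-created with fresh property values on refresh
    @RestController
    public class MessageController {

        @Value("${app.greeting:hello}") // hypothetical property with a default
        private String greeting;

        @GetMapping("/greeting")
        public String greeting() {
            return greeting;
        }
    }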
It all depends on your use case.
Can you elaborate more on "change properties at runtime"?
I want to run and deploy Java REST API code on Bluemix. This is more to understand the DevOps capabilities in conjunction with API management.
I tried to use this: http://www.codingpedia.org/ama/tutorial-rest-api-design-and-implementation-in-java-with-jersey-and-spring/
But I could not push it to Bluemix. May I get some support?
Update:
When I push it to Bluemix, I get an error saying it could not find an appropriate runtime.
Reading your comments, it seems you are searching for some pointers to create a starter Java REST application (possibly integrating a delivery pipeline).
You can start by creating an application on Bluemix using the Liberty for Java runtime. Then, from your application dashboard, you can click on "Add Git" to create a Git repository on IBM Bluemix DevOps Services (IDS). Now you have your starter application running on Bluemix and its code hosted on IDS. You can edit the code directly in the Web IDE of IDS (by clicking on "Edit Code") and from there push new versions of the application to Bluemix, or you can clone the repository to your local environment (for example using the Eclipse Tools for Bluemix) and deploy directly from your machine to Bluemix.
Using the first option, you will be able to quickly set up a delivery pipeline using the "Build & Deploy" button and use the DevOps capabilities of IDS. The Build & Deploy feature, also known as the pipeline, automates the continuous deployment of your projects. In a project's pipeline, sequences of stages retrieve input and run jobs, such as builds, tests, and deployments.
To add REST capabilities to the sample application you can, for example, use JAX-RS 2.0. Take a look here.
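For instance, a minimal JAX-RS 2.0 application that runs on the Liberty runtime could look like this; the class and path names are illustrative:

    import javax.ws.rs.ApplicationPath;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.Application;
    import javax.ws.rs.core.MediaType;

    // Activates JAX-RS and mounts all resources under /api
    @ApplicationPath("/api")
    public class RestApplication extends Application {
    }

    @Path("/hello")
    @Produces(MediaType.APPLICATION_JSON)
    class HelloResource {

        @GET
        public String hello() {
            return "{\"message\":\"Hello from Bluemix\"}";
        }
    }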
Java EE JAX-RS REST API starter
Use my Java REST API starter for Bluemix. It uses Java EE + JAX-RS + Swagger.
Just fork it, run the pom.xml to generate a WAR, and push the WAR file to Bluemix. Works like a charm.
https://github.com/sanketsw/jax_rs_REST_Example
Spring Boot REST API starter for Bluemix
If you want a Spring Boot REST API starter, you can use the following boilerplate. It is a Netflix Eureka client, but you can ignore the Eureka annotations; the REST API will work seamlessly anyway.
https://github.com/sanketsw/Netflix_Eureka_Client_Hello_World
Another, cleaner Spring Boot REST API starter is here: https://github.com/sanketsw/SpringBoot_REST_API
Is it possible to have a Puppet setup where you use JClouds to instantiate new virtual machines on your cloud, but then have their configuration (software stack) defined and implemented through Puppet?
Or is there something inherent to the nature of Puppet that prevents its use on a cloud provider like AWS, RackSpace or Heroku?
There are two separate issues involved here: bootstrapping Puppet on cloud nodes, and orchestrating between them (e.g. configuring application servers with the IP address of the database).
For bootstrapping, there are many tools available: AWS CloudFormation can be integrated using user-data, and cloud-init (the default on Ubuntu, EC2 Linux AMIs, and many EL images) supports bootstrapping Puppet out of the box. Puppet Labs offers a cloud provisioner, and lastly there is Cloudify.
Other than CloudFormation and Cloudify, most tools don't manage your stack after bootstrapping and do not offer orchestration. CloudFormation itself offers only boot-time orchestration, and it is pretty lame. Puppet itself is lacking with respect to orchestration (compared with Chef's excellent search feature, for example).
Cloudify provides ongoing stack management and fancy orchestration, exposed via a Puppet integration module. This gives you the ability to pass information between nodes (for service discovery, credential distribution, etc.) and to bootstrap your entire system with one command. Plus, it supports most clouds in existence.
Puppet is an excellent choice for configuring your cloud infrastructure. Most cloud providers allow you to call a script on first boot (EC2 has user-data); you can make this script insert some node-type data, then clone a Puppet repo and apply it. If you don't want to run a puppet master service (which can be a hassle to set up and maintain), you can also use Git to push updates to the configuration, and even generate a new image on config changes to allow rapid node launches with your latest setup. Check out this blog.
The only open source tool I'm aware of that directly announces Puppet + jclouds integration is Apache Whirr, but this is mainly via puppet apply rather than a puppet master.
As Puppet has a RESTful interface now, it should be possible to create native Puppet support in jclouds, as we do for Chef. You'd be welcome to help define that :)
In the meantime, you can either use something like Whirr, or craft instructions and supply them as a shell script via jclouds TemplateOptions.runScript during node creation, or later using the submitScriptOnNode command (both of which currently operate via SSH).
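As a rough sketch of the runScript route: the credentials, the bootstrap script body, and the group name below are placeholders, and the exact option methods vary a bit between jclouds versions:

    import org.jclouds.ContextBuilder;
    import org.jclouds.compute.ComputeService;
    import org.jclouds.compute.ComputeServiceContext;
    import org.jclouds.compute.domain.Template;

    public class ProvisionWithPuppet {
        public static void main(String[] args) throws Exception {
            ComputeServiceContext context = ContextBuilder.newBuilder("aws-ec2")
                    .credentials("accessKeyId", "secretKey") // placeholders
                    .buildView(ComputeServiceContext.class);
            ComputeService compute = context.getComputeService();

            // Hypothetical bootstrap: install Puppet, fetch a repo, apply it
            String bootstrap = "apt-get -y install puppet && "
                    + "git clone https://example.com/puppet-repo.git /etc/puppet-repo && "
                    + "puppet apply /etc/puppet-repo/manifests/site.pp";

            Template template = compute.templateBuilder().build();
            template.getOptions().runScript(bootstrap); // runs over SSH after boot

            compute.createNodesInGroup("puppet-nodes", 1, template);
            context.close();
        }
    }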