I have some Java applications running on Google Compute Engine instances in a Container Engine cluster. I upgraded the cluster to the newest version (1.7.8) and changed the node image from Container-Optimized OS to Ubuntu. Now my pods crash when trying to connect to the Cloud SQL database with this error message:
The Application Default Credentials are not available. They are available if running in Google Compute Engine.
Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials.
See https://developers.google.com/accounts/docs/application-default-credentials for more information.
The service account is the same as before the upgrade, with the scope https://www.googleapis.com/auth/sqlservice.admin
Does anyone have an idea why I'm getting this error now? Is the best solution to create the environment variable?
Ideally you should be using the GOOGLE_APPLICATION_CREDENTIALS environment variable. Otherwise you are using the "Compute Engine default service account" of the VMs.
See this tutorial for the best practices: https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform
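For reference, the Java client libraries resolve Application Default Credentials through GoogleCredentials.getApplicationDefault(), which checks GOOGLE_APPLICATION_CREDENTIALS first and only then falls back to the metadata server on the node. A minimal sketch, assuming the google-auth-library-oauth2-http dependency (the class name here is just for illustration):

    import com.google.auth.oauth2.GoogleCredentials;
    import java.io.IOException;

    public class AdcCheck {
        public static void main(String[] args) throws IOException {
            // Reads the JSON key file named by GOOGLE_APPLICATION_CREDENTIALS if it is set,
            // otherwise falls back to the GCE/GKE metadata server.
            GoogleCredentials credentials = GoogleCredentials.getApplicationDefault();
            System.out.println("Loaded credentials: " + credentials);
        }
    }

In a Kubernetes setup, the usual approach is to store a service account key file in a secret, mount it into the pod, and point GOOGLE_APPLICATION_CREDENTIALS at the mounted path; the tutorial linked above walks through that.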
Related
We have a Windows file share with JPEGs that I'm trying to access from Pivotal Cloud Foundry via a Java Spring Boot REST API. What are the steps needed in both PCF and Java itself to achieve this?
Do I have to mount the drive in PCF first, and then use the standard java.io libraries to access the files? For now we simply want to read the JPEG files into a BufferedImage and return them as Base64 after some graphics manipulation (which I know how to do), but I'm stuck on what to do for this within the realm of PCF. Obviously, it works just fine on my Windows development machine, where my logged-in user also has credentials to the file share and there is no need to mount it or do anything special.
I keep reading about SMB and a JCIFS library online, but I'm still not sure whether this is what I need or how to apply it with the technologies at hand.
Starting in Pivotal Cloud Foundry 2.4, you are able to enable SMB volume services. This allows you to cf create-service a volume service that points to your SMB server. You can then bind that to your app, and when your app starts the platform will mount the SMB volume to the mount point you specify. At that point, your app would just need to know the mount point, like /smb or /files, so it could use standard Java I/O to read in the files.
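Once the share is mounted, reading the JPEGs really is plain java.io / javax.imageio work. A rough sketch, assuming the service was bound with a mount point of /smb (the mount point and file name are placeholders):

    import javax.imageio.ImageIO;
    import java.awt.image.BufferedImage;
    import java.io.ByteArrayOutputStream;
    import java.io.File;
    import java.io.IOException;
    import java.util.Base64;

    public class MountedShareReader {

        private static final String MOUNT_POINT = "/smb"; // whatever you chose when binding

        public static String readAsBase64Jpeg(String fileName) throws IOException {
            // The mounted share is just a directory on the container's file system.
            BufferedImage image = ImageIO.read(new File(MOUNT_POINT, fileName));

            // ...graphics manipulation would happen here...

            ByteArrayOutputStream out = new ByteArrayOutputStream();
            ImageIO.write(image, "jpg", out);
            return Base64.getEncoder().encodeToString(out.toByteArray());
        }
    }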
If you are a platform operator, the instructions for enabling SMB volume services are here.
https://docs.pivotal.io/pivotalcf/2-4/opsguide/enable-vol-services.html#smb-enable
If you are a developer, the instructions for using it are here.
https://docs.pivotal.io/pivotalcf/2-4/devguide/services/using-vol-services.html#smb
If you are on an older version of PCF or your operator has not enabled this feature, then you would need to code something into your app to directly access your SMB share. There are multiple libraries capable of doing this (and there could be more than those in this list).
https://www.jcifs.org/
https://github.com/hierynomus/smbj
https://github.com/AgNO3/jcifs-ng
I can't recommend a particular one, so you'd need to evaluate and figure out which one works for you.
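As a starting point, here is a rough sketch of reading a file with smbj, based on its documented API; the host, share, path, and credentials are placeholders, and you should double-check the signatures against the smbj README for the version you use:

    import com.hierynomus.msdtyp.AccessMask;
    import com.hierynomus.mssmb2.SMB2CreateDisposition;
    import com.hierynomus.mssmb2.SMB2ShareAccess;
    import com.hierynomus.smbj.SMBClient;
    import com.hierynomus.smbj.auth.AuthenticationContext;
    import com.hierynomus.smbj.connection.Connection;
    import com.hierynomus.smbj.session.Session;
    import com.hierynomus.smbj.share.DiskShare;
    import com.hierynomus.smbj.share.File;

    import java.io.InputStream;
    import java.util.EnumSet;

    public class SmbjReadExample {
        public static void main(String[] args) throws Exception {
            SMBClient client = new SMBClient();
            try (Connection connection = client.connect("fileserver.example.com")) {
                Session session = connection.authenticate(
                        new AuthenticationContext("user", "password".toCharArray(), "DOMAIN"));
                try (DiskShare share = (DiskShare) session.connectShare("images")) {
                    File remoteFile = share.openFile("photos/example.jpg",
                            EnumSet.of(AccessMask.GENERIC_READ),
                            null,
                            SMB2ShareAccess.ALL,
                            SMB2CreateDisposition.FILE_OPEN,
                            null);
                    try (InputStream in = remoteFile.getInputStream()) {
                        // read the JPEG bytes here, e.g. ImageIO.read(in)
                    }
                }
            }
        }
    }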
Do keep in mind that regardless of which library you pick, you need network access to the SMB server from your application running on PCF: the IP needs to be routable and not blocked by a firewall. If you're unable to connect, review network access with your platform operator and make sure nothing is blocking the connection (firewall, application security groups, etc.).
Hope that helps!
I'm hosting a web application on Amazon Web Services in two different Elastic Beanstalk environments (i.e. two different RDS instances), one for test and another for production. Each time I deploy the application I need to change the connection URL according to the environment.
Is there anything I can do to automate this process? A condition that checks the environment and connects using the right URL, or something similar?
You should be using Elastic Beanstalk environment variables to store the database connection URL and any other environment-specific settings. Your code would simply pull the value out of the environment variable, instead of having to do some sort of check.
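A minimal sketch of the code side, assuming you define an environment property in each Elastic Beanstalk environment's software configuration (the name RDS_CONNECTION_URL here is arbitrary):

    public class DatabaseConfig {

        public static String connectionUrl() {
            // Set per environment in the Elastic Beanstalk console under the
            // environment's software configuration (environment properties).
            String url = System.getenv("RDS_CONNECTION_URL");
            if (url == null) {
                throw new IllegalStateException("RDS_CONNECTION_URL is not set for this environment");
            }
            return url;
        }
    }

Each environment (test and production) then carries its own value, and the same deployable artifact works in both.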
I'm trying to learn Google App Engine, though they're making it very difficult.
When I deploy an application through windows console or through Eclipse's GAE plugin, it works fine, but what am I deploying the application to? What web server/container is being used? Should I be able to see the deployed files in my google dev console?
Also, when I use the "click-to-deploy" feature to deploy an instance of Tomcat, it sets it to a new URL as an "external" IP address. Why is this not being set to my project's appspot URL? Is this an entirely different server created in addition to the default one that is created automatically?
Searching for GAE info on the web just returns millions of pages about their offerings, but nothing to explain the behind the scenes stuff.
Thanks!
Let me break down the questions:
When you deploy through the console or the Eclipse plugin, you are deploying to the App Engine runtime. You can see what is running by going to the App Engine section of the Google Developers Console.
This app is served from the .appspot.com domain as well.
Click-to-deploy is not App Engine, but Compute Engine. Compute Engine is more akin to a VM in the cloud. You get SSH access and a Linux or Windows operating system, but don't get all the auto-scaling and things built into App Engine. You would access this through the IP address, not the appspot URL.
I hope this helps!
I'm developing a Java servlet application and testing it with Eclipse + Apache Tomcat (refer: http://www.vogella.de/articles/EclipseWTP/article.html#overview_wtp).
The application is currently tested on localhost and accessed by clients on the same LAN.
Now I need to deploy it to a web server where everyone, from anywhere, can access this servlet.
Could you guide me through what I have to do to achieve this task?
You need to have a computer accessible to everyone - i.e. placed on the internet and not behind a firewall - with the appropriate software installed (and hardened against hacker attacks).
If you do not have such a computer, you can have a look at Google App Engine, which allows you to deploy Java web applications (with some additional restrictions) to the Google cloud. This is free for low-volume applications.
Yes, you can do it by deploying your application on a cloud instance. Since we cannot keep our own server or computer running at all times (we may run into internet connection problems, power fluctuations, etc.), and since making our own instance public brings a lot of problems (from a security perspective too), it is better to use cloud instances.
There are many cloud service providers, such as AWS from Amazon, Google Cloud, and Microsoft Azure.
Take a look at this list of cloud service providers (it has links for all of the top 10 providers).
We need to have a context path to deploy a Java application and access it through the browser. We have nearly 10 applications on Oracle Application Server, and we would like our applications to work without a context path, i.e. we would like the application server to select the corresponding application based on the domain name.
I know this can be done, as Google App Engine does the same when users deploy their applications: the context path of those applications is just "/".
Any ideas on setting this up on Oracle app server?
I'm assuming that the Oracle Application Server being referred to is the older Oracle Containers for J2EE (OC4J).
With OC4J, you'll need to put OHS (Oracle HTTP Server) or any compatible HTTP Server (Apache 1/2 works) in front of OC4J, and configure the HTTP Server to forward requests to OC4J (there are mod_oc4j plugins available for the same). Additionally, you'll have to configure the HTTP Server to serve multiple virtual hosts.
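To illustrate the virtual-host part, here is a rough httpd.conf-style sketch that uses plain mod_proxy directives for readability; with mod_oc4j you would use its mount directives instead, and the host names, backend port, and context paths below are placeholders:

    NameVirtualHost *:80

    <VirtualHost *:80>
        ServerName app1.example.com
        # Forward everything under "/" to the app deployed at context path /app1
        ProxyPass        / http://localhost:8888/app1/
        ProxyPassReverse / http://localhost:8888/app1/
    </VirtualHost>

    <VirtualHost *:80>
        ServerName app2.example.com
        ProxyPass        / http://localhost:8888/app2/
        ProxyPassReverse / http://localhost:8888/app2/
    </VirtualHost>

The idea is the same regardless of the forwarding module: one virtual host per domain, each mapping "/" to a different application on the backend.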
The same information holds good even for Oracle WebLogic Server.
You can find more information on the same in Oracle HTTP Server Administrator's Guide. The guide to version 10.1.3.1 is available here; you might need to determine the appropriate version of OHS for your version of OC4J/WLS.
You could ask additional questions on OHS/Apache configuration on ServerFault.