I'm developing an application that I'm thinking of hosting on GAE, hopefully within the free tier. It's an "own-time" project and I find the GAE documentation pretty incomprehensible when it comes to working out which products are available, how much they cost, and how they should be used.
The app relies on users being able to upload images with some metadata. The metadata needs to be searchable and must allow the images to be displayed.
My problem comes with where to save the images. I'm storing the metadata in the Datastore. Google seems to imply that I should store the images in either Cloud Storage or the Blobstore, with a preference for Cloud Storage. This seems to be chargeable.
I also see mention of the Images service - is this something else I should consider?
From Default Google Cloud Storage bucket:
Applications can use a Default Google Cloud Storage bucket, which has
free quota and doesn't require billing to be enabled for the app. You
create this free default bucket in the Google Cloud Platform Console
App Engine settings page for your project.
So you can use GCS without being charged as long as you don't exceed the free quota.
You also probably want to stick with the standard environment for its free quota. From App Engine Pricing:
App Engine applications run as instances within the standard
environment or the flexible environment.
Instances within the standard environment have access to a daily limit
of resource usage that is provided at no charge defined by a set of
quotas. Beyond that level, applications will incur charges as
outlined below. To control your application costs, you can set a
spending limit. To estimate costs for the standard environment,
use the pricing calculator.
For instances within the flexible environment, services and APIs are
priced as described below.
And for images-related options, from Overview of Images API for Java:
Java 8 on App Engine supports Java's native image manipulation
classes such as AWT and Java2D alongside the App Engine
Images API.
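As a rough illustration of the Java2D route that quote mentions (a sketch, not GAE-specific; the class and method names are made up), a simple thumbnail resize needs nothing beyond the JDK:

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class Thumbnail {
    // Resize an image with plain Java2D, which the standard
    // environment supports alongside the Images API.
    static BufferedImage resize(BufferedImage src, int w, int h) {
        BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = dst.createGraphics();
        // Scale the source into the destination bounds.
        g.drawImage(src, 0, 0, w, h, null);
        g.dispose();
        return dst;
    }

    public static void main(String[] args) {
        BufferedImage src = new BufferedImage(800, 600, BufferedImage.TYPE_INT_RGB);
        BufferedImage thumb = resize(src, 200, 150);
        System.out.println(thumb.getWidth() + "x" + thumb.getHeight()); // 200x150
    }
}
```

This works headless (no display needed), which is why it is usable inside an App Engine instance.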
Related
I have a J2EE application running on AWS.
Users need to upload an image or PDF to the application for internal use.
What is the right way to get/create a path on AWS to store the images?
The images/PDFs will not be exposed for anyone to download; they are used only by the J2EE application.
I was searching and found "buckets", but buckets seem to be exposed to the outside world for manual upload, so I am not sure if this is the right way to go.
You can implement a file upload feature in your application (a page that the user can access) which streams the file to memory within the application (example here for Spring web application). Once the image is in memory, you can store it in a secured AWS S3 bucket with the AWS SDK.
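A minimal sketch of the "streams the file to memory" step, using only the JDK; the class and method names are illustrative. Handing the resulting byte array (or the stream itself) to the AWS SDK's `putObject` call is then straightforward:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class UploadBuffer {
    // Read an uploaded file's InputStream fully into memory before
    // handing the bytes to the storage SDK.
    static byte[] readFully(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] data = readFully(new ByteArrayInputStream("hello".getBytes()));
        System.out.println(data.length); // 5
    }
}
```

For large files you would stream directly to S3 instead of buffering the whole upload in memory, but for typical images this keeps the servlet code simple.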
In AWS there are multiple storage options available, but the best option would be to use an S3 bucket. By default an S3 bucket is private and not open to the outside world. You can manage permissions on the bucket and authorize only your application to upload files there and view them. There are a couple of benefits to using S3 for file uploads:
Extremely high durability (99.999999999%)
High availability (99.99%)
High scalability
Unlimited storage
Low cost, and even lower cost for archiving data with lifecycle rules
Versioning
and more
Also, your application can scale independently, without being limited by storage.
I am currently working on different aspects of Google App Engine. I am still in the study phase, and have built some small apps and deployed them to the cloud. Recently, when I was installing the command-line tool for Cloud Storage (gsutil), I came across its versioning support and was able to retrieve old or deleted objects through gsutil. So is building a document management system on GAE with Google Cloud Storage a good idea, or should I be using the Google Drive SDK?
Please guide me on this problem.
Thanks in advance
Completely different products for completely different use cases.
Google Cloud Storage is storage on the cloud, with no further abstractions. If you want to build a document management system from scratch, prefer it as the storage provider.
If you build an app on top of Google Drive, you inherit a file system abstraction, user management, a permissions model, and so on. But you don't own the users, nor their drives. Additionally, Drive's quota management is fine-tuned for per-user usage. Most people think that creating a single Drive account and logically sharing it among their users at the application level will work. It's unlikely to scale, due to the quota limitations.
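For reference, the versioning behavior the question noticed can be managed with gsutil roughly like this (the bucket and object names are placeholders; the `#<generation>` suffix addresses a specific archived version):

```shell
# Enable object versioning on a bucket
gsutil versioning set on gs://my-docs-bucket

# Check the bucket's versioning state
gsutil versioning get gs://my-docs-bucket

# List all versions of an object, including archived/deleted generations
gsutil ls -a gs://my-docs-bucket/report.pdf

# Restore an old generation by copying it back over the live object
gsutil cp gs://my-docs-bucket/report.pdf#1612345678901234 \
    gs://my-docs-bucket/report.pdf
```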
I am about to develop my master's project using Flex as the front end, with BlazeDS, Java web services and MongoDB in the backend. I am looking to deploy and manage it on a cloud. (The application analyzes financial data from various sources; I will need to query multiple endpoints for news articles and the DB for processing.)
This is an experiment in using the cloud rather than deploying locally for demo and presentation purposes.
I saw Heroku (http://www.heroku.com/), but I am not sure if it allows Flash.
Please suggest a cloud application platform that allows Flex, BlazeDS, Java web services and MongoDB.
Amazon Web Services is a good place to start. You can have an instance ready in about 15-30 minutes from signing up. If you are just experimenting, try to get an Amazon Linux image (AMI) up and running. Scour the net for HOWTOs on setting up Tomcat; for your requirements full J2EE might be too much, but you might know better.
But a word of advice: it's better to get your application working on a local machine first. Then drop the programmer hat and put on the deployment hat 100%, because it's a pain configuring the deployment environment: Tomcat configuration, BlazeDS, Mongo's failover servers, load balancers, and all kinds of non-programming tasks. You will want to keep your development stack close to home so you can diagnose problems quickly.
The cloud business is great only when you want to 1) not use your home PC and bandwidth as a server, 2) have global mirror points for your application so that users' latency in one part of the world is no worse than in another, or 3) distribute the computing load of one application across many instances of the same application.
Clouds are relatively cheap to deploy to, but if you have an application that hogs GBs of bandwidth and storage, be prepared to fork over $1000s+ in costs. You can save money by going with an OS with no licensing costs to get a better rate.
Amazon Web Services (AWS) provides a ready-to-use library to make calls to SDB, S3, SNS, etc. right from your Android app. This makes it really easy for a mobile developer who is not familiar with web services and web applications to create a completely scalable cloud-based app.
We pass our Amazon access credentials in these API calls to connect to our cloud account. My questions are:
How do I effectively use key rotation in the app? Since I would be distributing the app, a key change could mean a period of disruption for existing users.
Would hard-coding the Amazon access credentials inside the code (as a constant field, etc.) make them vulnerable to extraction, e.g. via decompiling?
I talked to the Amazon advocate for our region and he told me that the Amazon client library is not designed for such a purpose:
It could be used for in-house apps (not published ones), like client demo apps.
If you're bundling the credentials with an app to be published on an open market (not recommended), use IAM and create a separate credential with restricted access.
If you're building an app like Instagram, you may have to set up a web server to proxy your calls to Amazon (effectively making the client library useless).
Obviously, I was not very convinced. I think an entirely client-side library for Amazon communication (bypassing the need for a web server) could be a great advantage for mobile devs.
Re:
Would hard coding the Amazon Access Credentials inside the code (as a field Constant etc) make it vulnerable to extraction? Via decompiling etc.?
Yes, by looking for strings and patterns in the binary. Decompiling works too, but it would often not even be necessary.
The first question is: what sort of threats are you trying to protect against? Governments? Paid hackers? Or do you just want to make it non-trivial to gain access other than via the app? If it's the latter, a few mitigations:
Limit the access the keys have to just the data that the app needs.
Store the keys in the app in several pieces. Modify them in some way (e.g. ROT47), then re-combine them when sending to the service.
Don't put all of the key information into the app. Require the use of another security device, such as Amazon MFA.
Install monitoring to detect unusual patterns of access that could indicate access from outside of the app.
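As an illustration of the "split and lightly transform" idea above: ROT47 is self-inverse, so the same function both obfuscates and restores. The key fragments below are made up, and this only deters casual string scanning in the binary, not a determined attacker:

```java
public class KeyParts {
    // ROT47: rotate printable ASCII 33..126 by 47. Applying it twice
    // returns the original string (self-inverse).
    static String rot47(String s) {
        StringBuilder sb = new StringBuilder();
        for (char c : s.toCharArray()) {
            if (c >= 33 && c <= 126) {
                sb.append((char) (33 + ((c - 33 + 47) % 94)));
            } else {
                sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Hypothetical key stored as two ROT47-obfuscated fragments,
        // so the plaintext key never appears whole in the binary.
        String part1 = rot47("AKIA");
        String part2 = rot47("EXAMPLE");
        // Re-combine at runtime, just before calling the service.
        String key = rot47(part1) + rot47(part2);
        System.out.println(key); // AKIAEXAMPLE
    }
}
```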
I am developing a Java application using Google App Engine that depends on a largish dataset to be present. Without getting into specifics of my application, I'll just state that working with a small subset of the data is simply not practical. Unfortunately, at the time of this writing, the Google App Engine for Java development server stores the entire datastore in memory. According to Ikai Lan:
The development server datastore stub is an in memory Map that is persisted
to disk.
I simply cannot import my entire dataset into the development datastore without running into memory problems. Once the application is pushed into Google's cloud and uses Bigtable, there is no issue. But deployment to the cloud takes a long time, making development cycles painful, so developing this way is not practical.
I've noticed the Google App Engine for Python development server has an option to use SQLite as the backend datastore which I presume would solve my problem.
dev_appserver.py --use_sqlite
But the Java development server includes no such option (at least not documented). What is the best way to get a large dataset working with the Google App Engine Java development server?
There's no magic solution: currently, the only datastore stub for the Java API is an in-memory one. Short of implementing your own disk-based stub, your only options are to find a way to work with a subset of your data for testing, or to do your development on appspot.
I've been using the mapper api to import data from the blobstore, as described by Ikai Lan in this blog entry - http://ikaisays.com/2010/08/11/using-the-app-engine-mapper-for-bulk-data-import/.
I've found it to be much faster and more stable than the remote API bulkloader, especially when loading medium-sized datasets (100k entities) into the local datastore.