I have a basic AWS Lambda Java function, my-function:
package com.tds;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

public class Hello implements RequestHandler<Request, Response> {
    public Response handleRequest(Request request, Context context) {
        String greetingString = String.format("Hello %s", request.name);
        return new Response(greetingString);
    }
}
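The handler above assumes two simple request/response POJOs along these lines; the field and constructor are inferred from the snippet, so treat this as a sketch rather than the exact classes:

public class Request {
    // Public field so that request.name in the handler works; Lambda's
    // built-in JSON (de)serialization populates it from {"name": "..."}.
    public String name;

    public Request() {
    }
}

public class Response {
    private final String greeting;

    public Response(String greeting) {
        this.greeting = greeting;
    }

    // A getter is needed so Lambda can serialize the response back to JSON.
    public String getGreeting() {
        return greeting;
    }
}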
The Lambda function is registered correctly in AWS, its handler is com.tds.Hello, and it is wired up to AWS API Gateway correctly.
The final JAR is built with Maven without a problem. When I upload the JAR directly in the AWS console, the function works fine.
The problem appears when I use Bitbucket Pipelines to upload (and update) the function automatically after each commit. Following this tutorial (Automating AWS Lambda deployments), the pipeline completes with a success status, but when I test/run the function in AWS I get the following error:
{"errorMessage":"Class not found: com.tds.Hello","errorType":"java.lang.ClassNotFoundException"}
Has anyone faced this issue?
The problem was in how the ZIP file was created and passed through Bitbucket artifacts. Instead of compressing the JAR into a ZIP, I uploaded the JAR directly to AWS. I updated bitbucket-pipelines.yml as follows:
Old bitbucket-pipelines.yml
pipelines:
  default:
    - step:
        name: Build and package
        script:
          - apt-get update && apt-get install -y zip
          - zip my-function.zip target/my-function.jar
          - pipe: atlassian/aws-lambda-deploy:0.3.0
            variables:
              AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
              AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
              AWS_DEFAULT_REGION: $AWS_REGION
              FUNCTION_NAME: 'my-function'
              COMMAND: 'update'
              ZIP_FILE: 'my-function.zip'
New bitbucket-pipelines.yml
pipelines:
  default:
    - step:
        name: Build and package
        script:
          - pipe: atlassian/aws-lambda-deploy:0.5.0
            variables:
              AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
              AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
              AWS_DEFAULT_REGION: $AWS_REGION
              FUNCTION_NAME: 'my-function'
              COMMAND: 'update'
              ZIP_FILE: 'target/my-function.jar'
I am trying a very basic HTTP request with JMeter, but it always fails with the error below. A simple GET against Google works fine, but our internal servers do not:
java.net.NoRouteToHostException: No route to host (Host unreachable)
I can curl the same URL successfully and get a 200 response, so I'm not sure whether the problem is JMeter or Java. The only unusual thing is that our internal servers resolve to IPv6 addresses, but I wouldn't think that would be the problem.
Try adding the following line to the system.properties file (it lives in the "bin" folder of your JMeter installation):
java.net.preferIPv6Addresses=true
Or pass the property via the -D command-line argument, like:
jmeter -Djava.net.preferIPv6Addresses=true -n -t test.jmx -l result.jtl
References:
Java: Networking Properties
Configuring JMeter
Apache JMeter Properties Customization Guide
Overriding Properties Via The Command Line
I installed an HDP 2.5 Hadoop/Spark cluster using Cloudbreak on Azure.
Everything works except the Spark history server. The log says the default URI for the event log, hdfs:///spark-history, is invalid because the hostname is missing.
So I replaced it with a direct reference to the actual location on Azure blob storage: wasb://<host>:<port>/spark-history. This URI works when used with hdfs dfs -ls, but the Spark history server still won't start. Now it complains about a missing class: Caused by: java.lang.NoClassDefFoundError: com/microsoft/azure/storage/blob/BlobListingDetails.
So it seems some driver is not being loaded at startup. I did find /usr/hdp/current/hadoop-client/lib/azure-storage-2.2.0.jar, which might be it, but I'm not sure how to make the history server load that JAR at startup via the Ambari config editor, or whether this is even the right solution to the original problem.
The strangest thing is that Azure HDInsight also uses blob storage, and there the Spark history server simply runs with the default hdfs:///spark-history setting.
Any suggestions on how to load the azure-storage driver or any other approach to this problem?
Thanks
I'll answer my own question. Someone on the Hortonworks community forum had the answer: the Spark assembly JAR contains invalid Azure storage classes. Updating the assembly JAR solves the issue:
mkdir -p /tmp/jarupdate && cd /tmp/jarupdate
# locate the Azure storage JAR shipped with HDP
find /usr/hdp/ -name "azure-storage*.jar"
cp /usr/hdp/2.5.0.1-210/hadoop/lib/azure-storage-2.2.0.jar .
cp /usr/hdp/current/spark-historyserver/lib/spark-assembly-1.6.3.2.5.0.1-210-hadoop2.7.3.2.5.0.1-210.jar .
# unpack the storage JAR and merge its classes into the assembly JAR
unzip azure-storage-2.2.0.jar
jar uf spark-assembly-1.6.3.2.5.0.1-210-hadoop2.7.3.2.5.0.1-210.jar com/
# put the patched assembly back in place and clean up
mv -f spark-assembly-1.6.3.2.5.0.1-210-hadoop2.7.3.2.5.0.1-210.jar /usr/hdp/current/spark-historyserver/lib/spark-assembly-1.6.3.2.5.0.1-210-hadoop2.7.3.2.5.0.1-210.jar
cd .. && rm -rf /tmp/jarupdate
So first, I want to say that the only thing I have seen addressing this issue is here: Spark 1.6.1 SASL. However, even with the configuration for Spark and YARN authentication added, it is still not working. Below is my Spark configuration, using spark-submit on a YARN cluster on Amazon EMR:
SparkConf sparkConf = new SparkConf().setAppName("secure-test");
sparkConf.set("spark.authenticate.enableSaslEncryption", "true");
sparkConf.set("spark.network.sasl.serverAlwaysEncrypt", "true");
sparkConf.set("spark.authenticate", "true");
sparkConf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer");
sparkConf.set("spark.kryo.registrator", "org.nd4j.Nd4jRegistrator");
try {
    sparkConf.registerKryoClasses(new Class<?>[]{
        Class.forName("org.apache.hadoop.io.LongWritable"),
        Class.forName("org.apache.hadoop.io.Text")
    });
} catch (Exception e) {}
sparkContext = new JavaSparkContext(sparkConf);
sparkContext.hadoopConfiguration().set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem");
sparkContext.hadoopConfiguration().set("fs.s3a.enableServerSideEncryption", "true");
sparkContext.hadoopConfiguration().set("spark.authenticate", "true");
Note, I added spark.authenticate to the sparkContext's Hadoop configuration in code instead of in core-site.xml (which I assume I can do, since other settings work that way as well).
Looking here: https://github.com/apache/spark/blob/master/common/network-yarn/src/main/java/org/apache/spark/network/yarn/YarnShuffleService.java it seems like both spark.authenticate settings are necessary. When I run this application, I get the following stack trace:
17/01/03 22:10:23 INFO storage.BlockManager: Registering executor with local external shuffle service.
17/01/03 22:10:23 ERROR client.TransportClientFactory: Exception while bootstrapping client after 178 ms
java.lang.RuntimeException: java.lang.IllegalArgumentException: Unknown message type: -22
at org.apache.spark.network.shuffle.protocol.BlockTransferMessage$Decoder.fromByteBuffer(BlockTransferMessage.java:67)
at org.apache.spark.network.shuffle.ExternalShuffleBlockHandler.receive(ExternalShuffleBlockHandler.java:71)
at org.apache.spark.network.server.TransportRequestHandler.processRpcRequest(TransportRequestHandler.java:149)
at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:102)
at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:104)
at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:254)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:86)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:130)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
at java.lang.Thread.run(Thread.java:745)
In Spark's docs, it says
For Spark on YARN deployments, configuring spark.authenticate to true will automatically handle generating and distributing the shared secret. Each application will use a unique shared secret.
which seems wrong based on the comments in the YARN shuffle service file above. Even after troubleshooting, I am still lost on what I need to change to get SASL to work. Am I missing something obvious that is documented somewhere?
So I finally figured it out. The previously linked Stack Overflow thread was technically correct: I needed to add spark.authenticate to the YARN configuration. It may be possible to do this in code, but I couldn't figure out how, which makes sense at a high level, since the external shuffle service runs outside the application. I will post my configuration below in case anyone else runs into this issue in the future.
First, I used an AWS EMR configurations file (an example of this is passing --configurations file://yourpathhere.json to aws emr create-cluster in the AWS CLI).
Then, I added the following JSON to the file:
[
  {
    "Classification": "spark-defaults",
    "Properties": {
      "spark.authenticate": "true",
      "spark.authenticate.enableSaslEncryption": "true",
      "spark.network.sasl.serverAlwaysEncrypt": "true"
    }
  },
  {
    "Classification": "core-site",
    "Properties": {
      "spark.authenticate": "true"
    }
  }
]
I got the same error message on Spark on Dataproc (Google Cloud Platform) after I added configuration options for Spark network encryption.
I initially created the Dataproc cluster with the following command.
gcloud dataproc clusters create test-encryption --no-address \
--service-account=<SERVICE-ACCOUNT> \
--zone=europe-west3-c --region=europe-west3 \
--subnet=<SUBNET> \
--properties 'spark:spark.authenticate=true,spark:spark.network.crypto.enabled=true'
The solution was to additionally add the property 'yarn:spark.authenticate=true'. A working Dataproc cluster with Spark RPC encryption can therefore be created as follows:
gcloud dataproc clusters create test-encryption --no-address \
--service-account=<SERVICE-ACCOUNT> \
--zone=europe-west3-c --region=europe-west3 \
--subnet=<SUBNET> \
--properties 'spark:spark.authenticate=true,spark:spark.network.crypto.enabled=true,yarn:spark.authenticate=true'
I verified the encryption with ngrep. I installed ngrep as follows on the master node.
sudo apt-get update
sudo apt-get install ngrep
I then ran ngrep on an arbitrary port, 20001:
sudo ngrep port 20001
If you then run a Spark job with the following configuration properties, you can see that the communication between driver and worker nodes is encrypted:
spark.driver.port=20001
spark.blockManager.port=20002
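For illustration, here is a minimal Java sketch of a job that pins those ports via SparkConf so the traffic shows up under ngrep; the class and application names are made up, and any real job would work the same way:

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class PortPinnedJob {
    public static void main(String[] args) {
        // Fix the driver and block manager ports so ngrep on 20001/20002
        // can observe the (now encrypted) traffic.
        SparkConf conf = new SparkConf()
                .setAppName("port-pinned-job")
                .set("spark.driver.port", "20001")
                .set("spark.blockManager.port", "20002");

        JavaSparkContext sc = new JavaSparkContext(conf);
        long count = sc.parallelize(Arrays.asList(1, 2, 3, 4)).count();
        System.out.println("count = " + count);
        sc.stop();
    }
}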
Note, I would always advise also enabling Kerberos on Dataproc to secure authentication for Hadoop, YARN, etc. This can be achieved with the --enable-kerberos flag in the cluster creation command.
I'm trying jhipster with token-based authentication. It works perfectly.
Now, I want to run back-end and front-end code on different domains. How can I do this?
This is what I tried:
Run yo jhipster and select token-based authentication option:
Welcome to the JHipster Generator
? (1/13) What is the base name of your application? jhipster
? (2/13) What is your default Java package name? com.mycompany.myapp
? (3/13) Do you want to use Java 8? Yes (use Java 8)
? (4/13) Which *type* of authentication would you like to use? Token-based authentication (stateless, with a token)
? (5/13) Which *type* of database would you like to use? SQL (H2, MySQL, PostgreSQL)
? (6/13) Which *production* database would you like to use? MySQL
? (7/13) Which *development* database would you like to use? H2 in-memory with Web console
? (8/13) Do you want to use Hibernate 2nd level cache? Yes, with ehcache (local cache, for a single node)
? (9/13) Do you want to use clustered HTTP sessions? No
? (10/13) Do you want to use WebSockets? No
? (11/13) Would you like to use Maven or Gradle for building the backend? Maven (recommended)
? (12/13) Would you like to use Grunt or Gulp.js for building the frontend? Grunt (recommended)
? (13/13) Would you like to use the Compass CSS Authoring Framework? No
...
I'm all done. Running bower install & npm install for you
^C
Make two copies of the project as jhipster/backend and jhipster/frontend
Delete unnecessary files from back-end and front-end
rm -rf backend/.bowerrc
rm -rf backend/.jshintrc
rm -rf backend/bower.json
rm -rf backend/Gruntfile.js
rm -rf backend/package.json
rm -rf backend/src/main/webapp
rm -rf backend/src/test/javascript
rm -rf frontend/pom.xml
rm -rf frontend/src/main/java
rm -rf frontend/src/main/resources
rm -rf frontend/src/test/gatling
rm -rf frontend/src/test/java
rm -rf frontend/src/test/resources
Make changes in code to completely remove backend/frontend dependency
frontend/Gruntfile.js
...
var parseVersionFromPomXml = function() {
    return '1.2.2.RELEASE';
};
...
browserSync: { ..., proxy: "localhost:8081" }
frontend/src/main/webapp/scripts/app/app.js
angular.module('jhipsterApp', [...])
.constant('API_URL', 'http://localhost:8080/')
.run( ... )
frontend/src/main/webapp/scripts/**/*.service.js
angular.module('jhipsterApp').factory(..., function (..., API_URL) {
    return $http.post(API_URL + 'api/authenticate', ...);
});
angular.module('jhipsterApp').factory('Account', function Account($resource, API_URL) {
    return $resource(API_URL + 'api/account', {}, {...});
});
// Make similar changes in all service files.
backend/pom.xml
Remove yeoman-maven-plugin
backend/src/main/java/com/mycompany/myapp/SimpleCORSFilter.java
// Copied from here: https://spring.io/guides/gs/rest-service-cors/
@Component
public class SimpleCORSFilter implements Filter {
    public void doFilter(...) {
        ...
        response.setHeader("Access-Control-Allow-Origin", "*");
        ...
    }
}
Run
Terminal Tab #1: BACKEND
cd backend
mvn spring-boot:run
...
[INFO] com.mycompany.myapp.Application - Started Application in 11.529 seconds (JVM running for 12.079)
[INFO] com.mycompany.myapp.Application - Access URLs:
----------------------------------------------------------
Local: http://127.0.0.1:8080
External: http://192.168.56.1:8080
----------------------------------------------------------
Terminal Tab #2: FRONTEND
cd frontend/src/main/webapp
npm install -g http-server
http-server
Starting up http-server, serving ./ on: http://0.0.0.0:8081
Hit CTRL-C to stop the server
Terminal Tab #3: GRUNT
cd frontend
bower install
npm install
grunt serve
...
[BS] Proxying: http://localhost:8081
[BS] Access URLs:
-------------------------------------
Local: http://localhost:3000
External: http://10.34.16.128:3000
-------------------------------------
UI: http://localhost:3001
UI External: http://10.34.16.128:3001
-------------------------------------
Browse http://localhost:3000/#/login
Enter username:password as admin:admin
Our BACKEND tab reads:
[DEBUG] com.mycompany.myapp.security.Http401UnauthorizedEntryPoint - Pre-authenticated entry point called. Rejecting access
Apparently, I'm doing something wrong. What is it?
When requests fail due to CORS, there is no visible error on the backend. The HTTP request actually succeeds but is blocked on the front-end side by the browser. A message like this one will appear in the JS console:
XMLHttpRequest cannot load http://localhost:8080/api/authenticate. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:9000' is therefore not allowed access.
The error message you're seeing is actually related to authentication. When your JS makes cross-origin requests, the browser sends "pre-flight" requests using the HTTP OPTIONS method, and JHipster isn't configured to allow the OPTIONS method globally. I ran into this exact same problem myself while doing the same thing you did. The fix is very simple: just add this line to your com.mycompany.config.SecurityConfiguration, immediately before the first antMatchers entry:
.antMatchers(org.springframework.http.HttpMethod.OPTIONS, "/api/**").permitAll()
This explicitly allows all requests that use the OPTIONS method. In CORS, the OPTIONS pre-flight is how the browser discovers which headers and HTTP methods are allowed before it sends the actual request.
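For context, this is roughly where the line sits in the generated configuration; the surrounding matchers below are only placeholders for whatever your generated SecurityConfiguration already contains:

@Override
protected void configure(HttpSecurity http) throws Exception {
    http
        // csrf, headers, session management, etc. left exactly as generated
        .authorizeRequests()
            // allow CORS pre-flight requests through without authentication
            .antMatchers(org.springframework.http.HttpMethod.OPTIONS, "/api/**").permitAll()
            // ...followed by the matchers the generator already created, e.g.:
            .antMatchers("/api/authenticate").permitAll()
            .antMatchers("/api/**").authenticated();
}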
Finally, in your SimpleCORSFilter class, you should also add these headers:
response.setHeader("Access-Control-Allow-Methods", "POST, GET, OPTIONS, DELETE");
response.setHeader("Access-Control-Max-Age", "86400"); // 24 Hours
response.setHeader("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept, x-auth-token");
Separating the frontend and backend in a JHipster application is quite simple. Follow the steps below if you want to set up the frontend and backend as separate, individual applications using JHipster:
Create two directories for frontend and backend applications
mkdir frontend
mkdir backend
Change to the frontend directory and run the JHipster command to create just the frontend module:
cd frontend
jhipster --skip-server --db=sql --auth=jwt
If all works fine, run npm start to start your frontend application.
I'm using MySQL for the database and JWT for auth; if you also want to use WebSockets, add "--websocket=spring-websocket".
Now change to the backend directory and run the JHipster command to create just the backend module:
cd ..  # we are currently inside the frontend directory
cd backend
jhipster --skip-client
Run your backend application as you would run any Spring Boot application.
Now your frontend and backend applications run separately and individually, with the frontend talking to the backend via REST API calls.
In addition to xeorem's answer above, I also had to modify the parse-links-service.js to handle the preflight OPTIONS responses, which don't have the "link" response header:
var links = {};
if (!angular.isString(header)) {
    // CORS pre-flight OPTIONS responses have no "link" header
    return links;
}
if (header.length === 0) {
    throw new Error("input must not be of zero length");
}
// Split parts by comma
var parts = header.split(',');
...
Instead of adding API_URL to app.js, modify Gruntfile.js and add the API_URL to the ngConstants block for both DEV and PROD environments.
You can use the CORS filter from Tomcat. Add the Tomcat dependency to your pom.xml:
<dependency>
    <groupId>org.apache.tomcat</groupId>
    <artifactId>tomcat-catalina</artifactId>
    <version>8.0.15</version>
    <scope>provided</scope>
</dependency>
Use whatever Tomcat version your project already uses.
Add CORS filter initialization in WebConfigurer:
private void initCorsFilter(ServletContext servletContext, EnumSet<DispatcherType> disps) {
    log.debug("Registering CORS Filter");
    FilterRegistration.Dynamic corsFilter = servletContext.addFilter("corsFilter", new CorsFilter());
    Map<String, String> parameters = new HashMap<>();
    parameters.put("cors.allowed.headers", "Content-Type,X-Requested-With,accept,Origin,Access-Control-Request-Method,Access-Control-Request-Headers,Authorization");
    parameters.put("cors.allowed.methods", "GET,POST,HEAD,OPTIONS,PUT,DELETE");
    corsFilter.setInitParameters(parameters);
    corsFilter.addMappingForUrlPatterns(disps, false, "/*");
    corsFilter.setAsyncSupported(true);
}
Then call it from WebConfigurer.onStartup(...), as close to the top of the method as possible:
...
initCorsFilter(servletContext, disps);
...
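For reference, a rough sketch of how that call can sit inside onStartup; the rest of the method body is whatever JHipster generated for you, so treat the surrounding lines as placeholders:

@Override
public void onStartup(ServletContext servletContext) throws ServletException {
    log.info("Web application configuration");
    EnumSet<DispatcherType> disps =
            EnumSet.of(DispatcherType.REQUEST, DispatcherType.FORWARD, DispatcherType.ASYNC);

    // Register the CORS filter before the other filters so pre-flight
    // requests are handled first.
    initCorsFilter(servletContext, disps);

    // ...existing filter/servlet registrations generated by JHipster...
}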
I'm using JHipster version 4.14.5.
I copied the following files to a project-frontend folder:
.bowerrc
gulp
pom.xml
yarn.lock
gulpfile.js
readme.md
bower_components
.gitattributes
src/main/web
bower.json
.gitignore
package.json
target/www
Then ran:
yarn install
bower install
gulp install
Then changed gulp/config.js to:
apiPort: 8081
uri: 'http://localhost:'
Then started the project by running:
gulp serve