I'm facing an issue uploading the Maven deployment package to Amazon S3.
From Eclipse, I can generate the .jar file successfully, but I'm having trouble uploading it to the server.
Here is my Java code:
package main.java.mavantestproj;

import java.util.Map;

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;
import com.amazonaws.services.lambda.runtime.Context;

public class LambdaFunctionHandler {
    public String handleRequest(Map<String, Object> input, Context context) {
        context.getLogger().log("Input: " + input);
        AmazonDynamoDBClient client = new AmazonDynamoDBClient(new ProfileCredentialsProvider("mytest"));
        client.setRegion(com.amazonaws.regions.Region.getRegion(Regions.US_WEST_2));
        client.describeTable("ProductCatalog");
        // TODO: implement your handler
        return null;
    }
}
In the target folder I get two JARs: lambda-java-example-1.0-SNAPSHOT.jar and original-lambda-java-example-1.0-SNAPSHOT.jar.
The first JAR is 35 MB and the second is 4 KB. I'm not sure which one to upload to S3 to run my Lambda function.
You definitely need the large "uber-jar" so your dependency classes are included, but there is an alternative way to package things up for AWS Lambda: using the Maven Assembly plugin instead of the Shade plugin. You end up with an AWS Lambda deployment package in .zip format instead of a single .jar file. It looks a little more like a JEE .war file, with all the original .jar dependencies kept intact, and you can include other things like properties files that end up unpacked in the file system where the Lambda runs (which may be a little easier to find and load in your code). If you're interested in the details, there's a blog post about it here: http://whirlysworld.blogspot.com/2016/03/aws-lambda-java-deployment-maven-build.html

Packaging a Lambda function this way also makes it much easier to peek into the zip file, figure out which dependency jars are being included, and identify the ones you might be able to exclude.
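As a rough sketch, an assembly descriptor for such a .zip package might look like the following (the file name src/assembly/lambda-zip.xml is my own choice, and it would be referenced from the maven-assembly-plugin configuration in the pom; compiled classes go at the root and dependency jars under lib/, which is the layout Lambda expects):

<!-- src/assembly/lambda-zip.xml (hypothetical name) -->
<assembly xmlns="http://maven.apache.org/ASSEMBLY/2.0.0">
    <id>lambda-zip</id>
    <formats>
        <format>zip</format>
    </formats>
    <includeBaseDirectory>false</includeBaseDirectory>
    <fileSets>
        <!-- Project classes and resources, unpacked at the root of the zip -->
        <fileSet>
            <directory>${project.build.outputDirectory}</directory>
            <outputDirectory>.</outputDirectory>
        </fileSet>
    </fileSets>
    <dependencySets>
        <!-- Dependency jars kept intact under lib/ -->
        <dependencySet>
            <outputDirectory>lib</outputDirectory>
            <useProjectArtifact>false</useProjectArtifact>
        </dependencySet>
    </dependencySets>
</assembly>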
This still doesn't get Maven to handle the actual deployment of the package to AWS (create or update). Deploying, and capturing the deployment info (ARN, API-gateway app-id-url, etc.), seems to be the next thing for which Amazon hasn't provided a very clear answer or solution.
The larger JAR file that is being generated has all of the library dependencies baked in. This is the one you want to upload to S3 for use by AWS Lambda, since those dependencies are required at runtime.
If you want to make this file smaller, make sure you only include the libraries you actually need and remove any unnecessary ones. With the AWS SDK, a common way to do this is to depend only on the modules for the specific services you call, such as DynamoDB, instead of the entire SDK.
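For example, depending only on the DynamoDB module of the SDK might look like this in the pom (the version here is just a placeholder; use whichever 1.x release you're on):

<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-dynamodb</artifactId>
    <!-- placeholder version -->
    <version>1.11.0</version>
</dependency>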
It seems a standalone JAR file built using the Shade plugin is sufficient, as per this AWS documentation.
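For reference, the Shade plugin setup is small; something along these lines in the pom's build section (the version number is illustrative):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <!-- illustrative version -->
    <version>2.3</version>
    <configuration>
        <createDependencyReducedPom>false</createDependencyReducedPom>
    </configuration>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
        </execution>
    </executions>
</plugin>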
I am using the Hot Code Replace feature while Tomcat is running from Eclipse, and it works great.
But how can I do this manually when Tomcat is running outside Eclipse?
After some searching, I found that I need to use an agent like HotswapAgent. But that agent is used with a modified JDK called DCEVM. I don't want to use a modified JDK; I want to achieve the same thing with OpenJDK.
I know that modifications will be limited to method bodies only, but that's not a problem for me. How can I achieve exactly what Eclipse does for Hot Code Replace on an externally running Tomcat, without using an IDE?
Edit: The Eclipse example is just to clarify what I want to achieve. I do not want to use Eclipse at all. I just want to do Hot Code Replace in an application running in Tomcat.
Yes, it's possible to perform Hot Code Replace in a running JVM. It involves several steps.
Prepare (compile) the new version of the classes you want to replace. Let's say you want to replace org.pkg.MyClass, and the new version of this class is located at /new/path/org/pkg/MyClass.class.
Create a Java agent that uses the Instrumentation API to redefine the given class. Here is what the simplest agent might look like:
import java.lang.instrument.*;
import java.nio.file.*;

public class HotCodeReplace {
    // Entry point invoked when the agent is loaded into an already-running JVM.
    public static void agentmain(String args, Instrumentation instr) throws Exception {
        // The currently loaded class to replace...
        Class<?> oldClass = Class.forName("org.pkg.MyClass");
        // ...and the freshly compiled bytecode for its new version.
        Path newFile = Paths.get("/new/path/org/pkg/MyClass.class");
        byte[] newData = Files.readAllBytes(newFile);
        instr.redefineClasses(new ClassDefinition(oldClass, newData));
    }
}
Compile the above agent and pack it into a .jar with the following MANIFEST.MF:
Agent-Class: HotCodeReplace
Can-Redefine-Classes: true
The command to create HotCodeReplace.jar:
jar cvfm HotCodeReplace.jar MANIFEST.MF HotCodeReplace.class
Load the agent .jar into the target JVM. This can be done with the Attach API or simply with the jattach utility:
jattach <pid> load instrument false /path/to/HotCodeReplace.jar
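If you'd rather not use jattach, here is a minimal sketch of the same load via the JDK Attach API (the PID and jar path are placeholders; on Java 8 this needs tools.jar on the classpath, on 9+ the jdk.attach module):

import com.sun.tools.attach.VirtualMachine;

public class AgentLoader {
    public static void main(String[] args) throws Exception {
        // Attach to the target JVM by process id (placeholder "12345").
        VirtualMachine vm = VirtualMachine.attach("12345");
        try {
            // The second argument is delivered to agentmain() as its args string.
            vm.loadAgent("/path/to/HotCodeReplace.jar", "");
        } finally {
            vm.detach();
        }
    }
}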
We have a Java program that relies on a specific library. We have created a second library with a very similar API to the first; this one is made in-house, and we are ready to begin testing it.
To test, we would like to replace the jar in the Java program with the jar of our new library. The issue is that the new library does not have the exact same namespace, so the import statements will not align. For example:
Java program
import someLibrary.x.y.Foo;

public class Main {
    public static void main(String[] args) {
        new Foo().bar();
    }
}
The new library has the same API but a different namespace:
import anotherLibrary.x.y.Foo;
Question: How can I use a classloader or another tool to run a Java program but replace a dependency and redirect its imports to another namespace?
[EDIT] - We do not have access to the Java program's source code. We can have the program changed to use our new library, but we do not want to do that until after it has been thoroughly tested.
The only solution I can think of would involve writing a custom ClassLoader that alters the bytecode, changing the method and field references to the new class names.
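To give an idea of the shape of that, here is a minimal, untested sketch using ASM's ClassRemapper (the package names match the example above). One caveat: with the default parent-first delegation, classes the parent can already load never reach findClass, so in practice you'd load the application classes from a path only this loader knows about.

import java.io.IOException;
import java.io.InputStream;

import org.objectweb.asm.ClassReader;
import org.objectweb.asm.ClassWriter;
import org.objectweb.asm.commons.ClassRemapper;
import org.objectweb.asm.commons.Remapper;

// Rewrites references to someLibrary.* into anotherLibrary.* while loading,
// so the old imports resolve against the new library's classes.
public class RemappingClassLoader extends ClassLoader {

    public RemappingClassLoader(ClassLoader parent) {
        super(parent);
    }

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        String resource = name.replace('.', '/') + ".class";
        try (InputStream in = getResourceAsStream(resource)) {
            if (in == null) {
                throw new ClassNotFoundException(name);
            }
            ClassReader reader = new ClassReader(in);
            ClassWriter writer = new ClassWriter(reader, 0);
            // Maps internal names like "someLibrary/x/y/Foo" to
            // "anotherLibrary/x/y/Foo"; the ClassRemapper applies this to
            // method and field references and their descriptors as well.
            Remapper remapper = new Remapper() {
                @Override
                public String map(String internalName) {
                    if (internalName.startsWith("someLibrary/")) {
                        return "anotherLibrary/" + internalName.substring("someLibrary/".length());
                    }
                    return internalName;
                }
            };
            reader.accept(new ClassRemapper(writer, remapper), 0);
            byte[] bytes = writer.toByteArray();
            return defineClass(name, bytes, 0, bytes.length);
        } catch (IOException e) {
            throw new ClassNotFoundException(name, e);
        }
    }
}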
How about the straightforward solution:
Create a branch of your main program (in git or whatever source control tool you use)
Apply all the changes required to work with the new library (change all the imports)
Deploy on a test environment and test extensively
Merge back to master when you feel confident enough
Another solution could be:
Create a branch of the new library
Change its imports so that it looks exactly like the old one (packages and all)
Substitute the old library with the new one in your application
Deploy on a test environment and test extensively
When you're ready, deploy the new library to production and keep it running there for a grace period of a month or so (until you really feel confident)
After that month, change all the imports back (basically moving from the branch with the "old" imports to the branch with your real imports, in both the library and the application)
Update
It's also possible to relocate the packages of your version of the library automatically if you use Maven: the Maven Shade plugin can "relocate" the packages of your library so they match the packages of the existing library. See the Shade plugin's documentation.
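A sketch of what that could look like in the new library's pom, so its classes come out under the old package names (patterns taken from the example above):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <relocations>
                    <!-- Expose anotherLibrary.x.y.Foo as someLibrary.x.y.Foo -->
                    <relocation>
                        <pattern>anotherLibrary</pattern>
                        <shadedPattern>someLibrary</shadedPattern>
                    </relocation>
                </relocations>
            </configuration>
        </execution>
    </executions>
</plugin>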
I'm writing a Java program for school that also uses Thrift.
The problem is not so much the general program/program logic itself, but just importing Thrift (to use it in a specific part).
My basic, possibly wrong, understanding is that you write the program code (here empfaenger.java), then you import Thrift into this file by adding the needed import statements, e.g.:
import org.apache.thrift.TException;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.protocol.TProtocol;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;
and adding a file in the same directory from which they can actually be imported, in this case libthrift-0.13.0.jar.(1) Later you also import a compiled .thrift file with the language-specific realization of the IDL code, which itself imports some Thrift classes. This file is here named syncautohersteller.
EDIT: The approach with the .jar file was recommended by the prof.
Current project structure (as seen in IntelliJ):
The problem now is that all the Thrift import statements throw errors, e.g.
empfaenger.java
java: package org.apache.thrift does not exist
syncautohersteller
package javax.annotation does not exist
so clearly I'm doing something wrong.
Does anybody know how to fix this?
(1) I got the file from the Thrift folder (Home/Downloads/thrift-0.13.0/lib/java/build/libs, taking the first of the three .jar files in that folder) after installing Thrift using ./configure, sudo make, and sudo make install, and verifying the install by running "~/Downloads/thrift-0.13.0$ thrift --version", with the result:
Thrift version 0.13.0
In IntelliJ IDEA, you can find some useful information about adding external JARs in this question: Correct way to add external jars (lib/*.jar) to an IntelliJ IDEA project.
I suggest managing the project's dependencies through Maven, which helps you add JAR dependencies to the classpath in a simpler way.
First, convert your project into a Maven project as explained in the IntelliJ IDEA documentation.
Then you can follow these steps:
Go to the Maven repository website
Search for Thrift
Select the first result
Select the version you need
Copy the Maven dependency:
<dependency>
    <groupId>org.apache.thrift</groupId>
    <artifactId>libthrift</artifactId>
    <version>0.13.0</version>
</dependency>
Add the Maven dependency to your pom.xml file
Execute mvn clean install, or trigger it from the Maven tool window in IntelliJ
This process will help you and the people who work with you manage the project's dependencies in a simpler way.
You can do this most simply with Gradle, something like this:
build.gradle.kts:
repositories {
    mavenCentral()
}

dependencies {
    implementation("org.apache.thrift:libthrift:0.13.0")
}
I'm presently writing an ORDS plugin that is intended to filter certain requests. I'm not quite able to get the filtering working, so I decided to follow Oracle's provided instructions for their Plugin API.
I've configured much of the build with a Gradle task which automatically:
Downloads the WAR
Adds the plugin JAR (also previously built with Gradle) to ORDS
Ensures that the configdir is set appropriately
Effectively, this is the automated equivalent to me running:
# Assuming the JAR is cURL'd in from somewhere...
java -jar ords.war plugin build/myPlugin.jar
java -jar ords.war configdir /home/makoto/ords-configuration
...and I deploy this to my local IntelliJ instance.
Here is what my servlet looks like. It's pretty basic.
import oracle.dbtools.plugin.api.di.annotations.Provides;
import oracle.dbtools.plugin.api.http.annotations.Dispatches;
import oracle.dbtools.plugin.api.http.annotations.PathTemplate;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;

@Provides
@Dispatches(@PathTemplate("/plugin/servlet/"))
public class TestServlet extends HttpServlet {
    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        response.getWriter().println("this worked?!");
    }
}
I'm led to believe by the documentation that I should be able to access it at http://localhost:8080/ords/my_schema/plugin/servlet/, but that doesn't seem to be the case. I'm instead greeted with a 404:
DispatcherNotFoundException [statusCode=404, reasons=[]]
at oracle.dbtools.http.entrypoint.Dispatcher.choose(Dispatcher.java:87)
at oracle.dbtools.http.entrypoint.Dispatcher.dispatch(Dispatcher.java:98)
at oracle.dbtools.http.entrypoint.EntryPoint$FilteredServlet.service(EntryPoint.java:240)
at oracle.dbtools.http.filters.FilterChainImpl.doFilter(FilterChainImpl.java:73)
at oracle.dbtools.url.mapping.RequestMapperImpl.doFilter(RequestMapperImpl.java:125)
at oracle.dbtools.url.mapping.URLMappingBase.doFilter(URLMappingBase.java:103)
at oracle.dbtools.url.mapping.filter.URLMappingFilter.doFilter(URLMappingFilter.java:148)
at oracle.dbtools.http.filters.HttpFilter.doFilter(HttpFilter.java:47)
at oracle.dbtools.http.filters.FilterChainImpl.doFilter(FilterChainImpl.java:64)
at oracle.dbtools.http.cors.CORSResponseFilter.doFilter(CORSResponseFilter.java:83)
at oracle.dbtools.http.filters.HttpResponseFilter.doFilter(HttpResponseFilter.java:45)
at oracle.dbtools.http.filters.FilterChainImpl.doFilter(FilterChainImpl.java:64)
at oracle.dbtools.http.errors.ErrorPageFilter.doFilter(ErrorPageFilter.java:94)
at oracle.dbtools.http.filters.HttpFilter.doFilter(HttpFilter.java:47)
at oracle.dbtools.http.filters.FilterChainImpl.doFilter(FilterChainImpl.java:64)
at oracle.dbtools.http.auth.ForceAuthFilter.doFilter(ForceAuthFilter.java:44)
at oracle.dbtools.http.filters.HttpFilter.doFilter(HttpFilter.java:47)
at oracle.dbtools.http.filters.FilterChainImpl.doFilter(FilterChainImpl.java:64)
at oracle.dbtools.http.filters.Filters.filter(Filters.java:47)
at oracle.dbtools.http.entrypoint.EntryPoint.service(EntryPoint.java:82)
at oracle.dbtools.http.entrypoint.EntryPointServlet.service(EntryPointServlet.java:49)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at oracle.dbtools.rt.web.HttpEndpointBase.dispatchableServices(HttpEndpointBase.java:116)
at oracle.dbtools.rt.web.HttpEndpointBase.service(HttpEndpointBase.java:81)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
-- snip --
What am I missing? I'm unclear as to why what should be a very basic servlet, one virtually analogous to the "Hello World!" example they provide, is simply not registering appropriately.
Note:
The schema is enabled for ORDS.
This error happens with both containers I've used: Glassfish and Tomcat.
I am not using APEX, which seems to be a common add-on for this product. I'm intending to use ORDS as a RESTful provider for my data.
The trailing slash in the @Dispatches path doesn't seem to have an effect; the issue remains whether it's present or removed.
I am looking for authoritative answers or insights as to what could be going on here. Guesses and shots in the dark do me no good; I've been tinkering with this myself, and there's a very good chance that our tinkering paths would have overlapped.
As loath as I am to add pictures to any question, BalusC suggested that I inspect the contents of the JAR to ensure that a specific providers file is contained within.
From this screenshot, there appear to be two...
...and their contents are the same...
com.foo.bar.baz.bing.servlet.TestServlet
oracle.dbtools.plugin.api.di.AnnotationsProvider
...but when I go to extract the JAR and inspect the file, it only contains the AnnotationsProvider line.
oracle.dbtools.plugin.api.di.AnnotationsProvider
But wait! It gets weirder!
When I mount the JAR to extract individual files, I see lots of duplicates:
...which leads me to believe that, somehow, the older file is overwriting the newer file.
I've figured the issue out. BalusC's suggestion pointed me in the right direction.
ORDS expects providers to be registered through a file called META-INF/oracle.dbtools.plugin.api.di.providers. The file is a list of classes, given by their fully-qualified names, which have been annotated with @Provides. Any class that doesn't appear in there will not be picked up by ORDS.
What I was running into, as highlighted by my question, was duplicate file names present within the JAR. If I observed the JAR through Neovim, I'd see my classes' FQNs in one copy of the file and none in the other. If I observed it through Nautilus/File Extractor, I'd only see the copy with none of my FQNs.
The duplicate files turned out to be the smoking gun. In order to get this to work, I had to remove the duplicates from my built JAR. In Gradle, the way to accomplish this is as follows:
jar {
    duplicatesStrategy = DuplicatesStrategy.EXCLUDE
}
Now only the correct *providers file shows up, and my servlets can be reached within ORDS.
I will point out that this was a surprise; I didn't anticipate any kind of duplicate files being packaged within the JAR, nor does the ORDS documentation warn about this issue. Consider this a beacon to other devs to be mindful of it happening.
I see in the source code of the demo plugin JAR that it registers itself with an SPI. That's how the ORDS core in the WAR finds it. The Ant task provided in the ORDS example folder takes care of generating the necessary SPI files while creating the JAR. You mentioned that you used a Gradle task for this, so I gather that you wrote it yourself.
In order to verify whether your Gradle job generated a correct JAR too, extract the Gradle-produced plugin JAR and inspect whether there's a /META-INF/oracle.dbtools.plugin.api.di.providers file whose sole content is the FQN of your TestServlet. If not, then it definitely won't be discovered by the ORDS core in the WAR.
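A quick way to check this from the command line (the jar path is a placeholder):

jar tf build/myPlugin.jar | grep providers
unzip -p build/myPlugin.jar META-INF/oracle.dbtools.plugin.api.di.providers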
You could confirm whether your plugin servlet's source code is correct by replacing the PluginDemo servlet source code with your own servlet's contents and then building the JAR using the provided Ant task, as instructed in the tutorial. If that works, then it's definitely the Gradle task that needs to be fixed, not your plugin servlet. A more detailed answer can't be given, as this information is missing from the question, but this should at least push you in the right direction to nail down the issue.
I am trying to make an app using gRPC. In my .proto file I need to import the Timestamp message, and as per Google's documentation,
import google/protobuf/timestamp.proto
is how it should be added to the proto file. But it's giving me an error:
import google/protobuf/timestamp.proto is not found or has errors
Does anyone have any idea how to resolve this?
You are hitting a known issue; neither the well-known protos nor their generated code are included in protobuf-lite.
A workaround is to add an extra dependency and generate the code yourself. Assuming you are using Gradle and already using the com.google.protobuf plugin, you just need to add a protobuf dependency for the .proto files (or a JAR containing the .proto files) that you depend on:
dependencies {
    protobuf 'com.google.protobuf:protobuf-java:3.0.2'
}