I have a disk space issue in my Elastic Beanstalk instance due to log rotation, so I am trying to modify the default log rotation configuration by following the documentation:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.logging.html
After adding my config and rebuilding the environment, I can see my config file (at the path I specified) when I connect to the EB instance via SSH. However, it looks like my changes are not applied and the logs don't rotate according to my config.
##################################################################
## Sets up the elastic beanstalk log publication to include
## the admin logs for cloudwatch logs
##################################################################
Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      "AWS::CloudFormation::Init":
        configSets:
          "_OnInstanceBoot":
            "CmpFn::Insert":
              values:
                - EBCWLLogPublicationSetup
        EBCWLLogPublicationSetup:
          files:
            "/etc/logrotate.elasticbeanstalk.hourly/logrotate.elasticbeanstalk.awslogs.conf":
              content: |
                /var/log/awslogs.log {
                  size 2M
                  rotate 3
                  missingok
                  compress
                  notifempty
                  copytruncate
                  dateext
                  dateformat %s
                  olddir /var/log/rotated
                }
              mode: "000644"
My EB instance runs a Java application (Dropwizard, Java 1.8) that is dockerized.
Any idea?
Finally, I found a different approach that works:
container_commands:
  01-custom-rotate:
    command: "/bin/sed -i 's/size 10M/size 7M/g' /etc/logrotate.elasticbeanstalk.hourly/logrotate.elasticbeanstalk.awslogs.conf"
Basically it replaces text in the config file. The environment still needs to be rebuilt for the change to take effect.
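For reference, that command lives in a config file under the application's .ebextensions directory; a minimal sketch (the file name is just an example):

# .ebextensions/logrotate.config  (file name is illustrative)
container_commands:
  01-custom-rotate:
    command: "/bin/sed -i 's/size 10M/size 7M/g' /etc/logrotate.elasticbeanstalk.hourly/logrotate.elasticbeanstalk.awslogs.conf"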
I'm creating a custom Dockerfile with extensions for the official Keycloak Docker image. I want to change the web-context and add some custom providers.
Here's my Dockerfile:
FROM jboss/keycloak:7.0.0
COPY startup-config.cli /opt/jboss/tools/cli/startup-config.cli
RUN /opt/jboss/keycloak/bin/jboss-cli.sh --connect --controller=localhost:9990 --file="/opt/jboss/tools/cli/startup-config.cli"
ENV KEYCLOAK_USER=admin
ENV KEYCLOAK_PASSWORD=admin
and the startup-config.cli file:
/subsystem=keycloak-server/:write-attribute(name=web-context,value="keycloak/auth")
/subsystem=keycloak-server/:add(name=providers,value="module:module:x.y.z.some-custom-provider")
But unfortunately I receive this error:
The controller is not available at localhost:9990: java.net.ConnectException: WFLYPRT0053: Could not connect to remote+http://localhost:9990. The connection failed: WFLYPRT0053: Could not connect to remote+http://localhost:9990. The connection failed: Connection refused
The command '/bin/sh -c /opt/jboss/keycloak/bin/jboss-cli.sh --connect --controller=localhost:9990 --file="/opt/jboss/tools/cli/startup-config.cli"' returned a non-zero code: 1
Is it a matter of invalid localhost? How should I refer to the management API?
Edit: I also tried with ENTRYPOINT instead of RUN, but the same error occurred during container initialization.
You are trying to have WildFly load your custom config file at build time. The trouble is that the WildFly server is not running while the Dockerfile is building.
WildFly actually already has you covered regarding automatically loading custom config; there is built-in support for what you want to do. You simply need to put your config file in a "magic location" inside the image.
You need to drop your config file here:
/opt/jboss/startup-scripts/
So that your Dockerfile looks like this:
FROM jboss/keycloak:7.0.0
COPY startup-config.cli /opt/jboss/startup-scripts/startup-config.cli
ENV KEYCLOAK_USER=admin
ENV KEYCLOAK_PASSWORD=admin
Excerpt from the Keycloak documentation:
Adding custom script using Dockerfile
A custom script can be added by
creating your own Dockerfile:
FROM keycloak
COPY custom-scripts/ /opt/jboss/startup-scripts/
Now you can simply start the image, and the built-in feature in Keycloak (a WildFly feature, really) will look for config files in that specific directory and attempt to load them.
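For completeness, building and running the customized image could look like this (the image tag my-keycloak is just an example):

# build the image from the Dockerfile above, then run it
docker build -t my-keycloak .
docker run -p 8080:8080 my-keycloak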
Edit from comment with final solution:
While the original answer solved the issue of being able to pass configuration to the server at all, an issue remained with the content of the script. The following error was received when starting the container:
=========================================================================
Executing cli script: /opt/jboss/startup-scripts/startup-config.cli
No connection to the controller.
=========================================================================
The issue turned out to be in the startup-config.cli script: the JBoss command embed-server, needed to initiate a connection to the JBoss instance, was missing, as was the closing stop-embedded-server command. More about configuring JBoss in this manner can be found in the docs here: CHAPTER 8. EMBEDDING A SERVER FOR OFFLINE CONFIGURATION
The final script:
embed-server --std-out=echo
/subsystem=keycloak-server/theme=defaults/:write-attribute(name=cacheThemes,value=false)
/subsystem=keycloak-server/theme=defaults/:write-attribute(name=cacheTemplates,value=false)
stop-embedded-server
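Applied to the original question, the same pattern would wrap the CLI commands like this (a sketch; only the web-context change from the question is shown):

embed-server --std-out=echo
/subsystem=keycloak-server/:write-attribute(name=web-context,value="keycloak/auth")
stop-embedded-server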
WildFly management interfaces are not available when building the Docker image. Your only option is to start the CLI in embedded mode, as discussed here: Running CLI commands in WildFly Dockerfile.
A more advanced approach consists of using the S2I installation scripts to trigger CLI commands.
I am trying out Stackify Prefix v3.0.18 to profile a Spring Boot application in WebLogic 12c. The JVM is started with the stackify-java-apm agent as per the instructions:
-javaagent:"C:\Program Files (x86)\StackifyPrefix\java\lib\stackify-java-apm.jar"
On accessing the Spring Boot Actuator's /health endpoint, I do not get anything reported in the Prefix dashboard at http://localhost:2012. Is anything amiss here?
A couple of observations were made. The Prefix agent was trying:
To load a properties file from a Linux/Unix path, which failed:
16:16:24.826 [main] WARN com.stackify.apm.config.a - Unable to find properties file /usr/local/stackify/stackify-java-apm/stackify.properties
To write a file into a non-existent directory C:\Program Files (x86)\Stackify\stackify-java-apm\log\
I was unable to find an end-to-end demo or tutorial on setting up and using Prefix to profile a Java application.
I was looking on their support site and it seems that WebLogic 12c is not supported according to this link:
https://support.stackify.com/prefix-enable-java-profiling/
Have you tried submitting a ticket with them?
https://support.stackify.com/submit-a-ticket/
I'm working on a couple of Kafka connectors and I don't see any errors in their creation/deployment in the console output; however, I am not getting the result I'm looking for (no results whatsoever, for that matter, desired or otherwise). I made these connectors based on Kafka's example FileStream connectors, so my debugging technique was based on the use of the SLF4J Logger that is used in the example. I've searched the console output for the log messages that I thought would be produced, but to no avail. Am I looking in the wrong place for these messages? Or perhaps is there a better way of going about debugging these connectors?
Example uses of the SLF4J Logger that I referenced for my implementation:
Kafka FileStreamSinkTask
Kafka FileStreamSourceTask
I will try to reply to your question in a broad way. A simple way to do Connector development could be as follows:
Structure and build your connector source code by looking at one of the many Kafka Connectors available publicly (you'll find an extensive list available here: https://www.confluent.io/product/connectors/ )
Download the latest Confluent Open Source edition (>= 3.3.0) from https://www.confluent.io/download/
Make your connector package available to Kafka Connect in one of the following ways:
Store all your connector jar files (connector jar plus dependency jars excluding Connect API jars) to a location in your filesystem and enable plugin isolation by adding this location to the
plugin.path property in the Connect worker properties. For instance, if your connector jars are stored in /opt/connectors/my-first-connector, you will set plugin.path=/opt/connectors in your worker's properties (see below).
Store all your connector jar files in a folder under ${CONFLUENT_HOME}/share/java. For example: ${CONFLUENT_HOME}/share/java/kafka-connect-my-first-connector. (Needs to start with kafka-connect- prefix to be picked up by the startup scripts). $CONFLUENT_HOME is where you've installed Confluent Platform.
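To illustrate the first option, a sketch with made-up paths and jar names:

# Example layout on disk:
#   /opt/connectors/my-first-connector/my-first-connector.jar
#   /opt/connectors/my-first-connector/some-dependency.jar
# and in the Connect worker properties file:
plugin.path=/opt/connectors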
Optionally, increase your logging by changing the log level for Connect in ${CONFLUENT_HOME}/etc/kafka/connect-log4j.properties to DEBUG or even TRACE.
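For example, a sketch of that change (the file typically ships with an INFO root level):

# ${CONFLUENT_HOME}/etc/kafka/connect-log4j.properties
log4j.rootLogger=DEBUG, stdout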
Use Confluent CLI to start all the services, including Kafka Connect. Details here: http://docs.confluent.io/current/connect/quickstart.html
Briefly: confluent start
Note: The Connect worker's properties file currently loaded by the CLI is ${CONFLUENT_HOME}/etc/schema-registry/connect-avro-distributed.properties. That's the file you should edit if you choose to enable classloading isolation but also if you need to change your Connect worker's properties.
Once you have Connect worker running, start your connector by running:
confluent load <connector_name> -d <connector_config.properties>
or
confluent load <connector_name> -d <connector_config.json>
The connector configuration can be either in java properties or JSON format.
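For reference, a minimal connector configuration in properties format might look like this (the connector class and topic name are placeholders for your own connector; "topic" itself is a connector-specific setting):

name=my-first-connector
connector.class=com.example.MyFirstSourceConnector
tasks.max=1
topic=my-topic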
Run
confluent log connect to open the Connect worker's log file, or navigate directly to where your logs and data are stored by running
cd "$( confluent current )"
Note: change where your logs and data are stored during a session of the Confluent CLI by setting the environment variable CONFLUENT_CURRENT appropriately. E.g. given that /opt/confluent exists and is where you want to store your data, run:
export CONFLUENT_CURRENT=/opt/confluent
confluent current
Finally, to interactively debug your connector, a possible way is to apply the following before starting Connect with the Confluent CLI:
confluent stop connect
export CONNECT_DEBUG=y; export DEBUG_SUSPEND_FLAG=y;
confluent start connect
and then connect with your debugger (for instance, remotely to the Connect worker on the default port 5005). To stop running Connect in debug mode, just run unset CONNECT_DEBUG; unset DEBUG_SUSPEND_FLAG; when you are done.
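Attaching a debugger from the command line could then look like this (an IDE remote-debug configuration pointing at the same port works just as well):

# attach JDB to the suspended Connect worker
jdb -connect com.sun.jdi.SocketAttach:hostname=localhost,port=5005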
I hope the above will make your connector development easier and ... more fun!
I love the accepted answer. One thing - the environment variables didn't work for me... I'm using Confluent Community Edition 5.3.1.
Here's what I did that worked.
I installed the Confluent CLI from here:
https://docs.confluent.io/current/cli/installing.html#tarball-installation
I ran Confluent using the command confluent local start
I got the Connect app details using the command ps -ef | grep connect
I copied the resulting command to an editor and added the arg (right after java):
-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005
Then I stopped Connect using the command confluent local stop connect
Then I ran the connect command with the added arg.
Brief intermission ---
VS Code development is led by Erich Gamma - of Gang of Four fame, who also wrote Eclipse. VS Code is becoming a first-class Java IDE; see https://en.wikipedia.org/wiki/Erich_Gamma
Intermission over ---
Next I launched VS Code and opened the Debezium Oracle connector folder (cloned from here): https://github.com/debezium/debezium-incubator
Then I chose Debug - Open Configurations
and entered the highlighted debugging configuration
and then ran the debugger - it will hit your breakpoints!
The connect command should look something like this:
/Library/Java/JavaVirtualMachines/jdk1.8.0_221.jdk/Contents/Home/bin/java -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005 -Xms256M -Xmx2G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -Djava.awt.headless=true -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dkafka.logs.dir=/var/folders/yn/4k6t1qzn5kg3zwgbnf9qq_v40000gn/T/confluent.CYZjfRLm/connect/logs -Dlog4j.configuration=file:/Users/myuserid/confluent-5.3.1/bin/../etc/kafka/connect-log4j.properties -cp /Users/myuserid/confluent-5.3.1/share/java/kafka/*:/Users/myuserid/confluent-5.3.1/share/java/confluent-common/*:/Users/myuserid/confluent-5.3.1/share/java/kafka-serde-tools/*:/Users/myuserid/confluent-5.3.1/bin/../share/java/kafka/*:/Users/myuserid/confluent-5.3.1/bin/../support-metrics-client/build/dependant-libs-2.12.8/*:/Users/myuserid/confluent-5.3.1/bin/../support-metrics-client/build/libs/*:/usr/share/java/support-metrics-client/* org.apache.kafka.connect.cli.ConnectDistributed /var/folders/yn/4k6t1qzn5kg3zwgbnf9qq_v40000gn/T/confluent.CYZjfRLm/connect/connect.properties
The connector module is executed by the Kafka Connect framework. For debugging, we can use standalone mode and configure the IDE to use the ConnectStandalone main function as the entry point.
Create a debug configuration as follows. Remember to tick "Include dependencies with 'Provided' scope" if it is a Maven project.
The connector properties file needs to specify the connector class name via "connector.class" for debugging.
The worker properties file can be copied from the Kafka folder /usr/local/etc/kafka/connect-standalone.properties.
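The IDE run configuration described above is roughly equivalent to the following command line (classpath and connector file names are illustrative):

java -cp "<connect runtime and plugin jars>" org.apache.kafka.connect.cli.ConnectStandalone \
  /usr/local/etc/kafka/connect-standalone.properties \
  my-connector.properties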
I'm using a Debian distribution. I wrote the code on Windows, where it ran without errors and created a database. Although I set up the libraries on Debian, the database is not created, no data is added, and the Java program shows no error.
My database path:
dbPath=/var/lib/neo4j/data/graph.db
I guess the error is related to the database properties. I have two different properties directories, so I don't know where to set this:
- /etc/neo4j
- /var/lib/neo4j/conf
You should have the /etc/neo4j/neo4j-server.properties file, which typically begins like this:
################################################################
# Neo4j configuration
#
################################################################
#***************************************************************
# Server configuration
#***************************************************************
# location of the database directory
org.neo4j.server.database.location=data/graph.db
...
...
where the database path is relative.
If you want an absolute path instead, you should have this line:
org.neo4j.server.database.location=/var/lib/neo4j/data/graph.db
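After editing the file, restart Neo4j so the new location is picked up. The service name depends on how Neo4j was installed; with the Debian packages of that era it is typically:

sudo service neo4j-service restart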
I am trying to populate my Datastore entity with data that I have in a CSV file, but without success.
This is my CSV file, places.csv:
name,placeId,location,key,address
A store at City1 Shopping Center,store101,"47,-122",1,"Some address of the store in City 1"
A big store at Some Mall,store102,"47,-122",2,"Some address of the store in City 2"
bulkloader.yaml:
python_preamble:
- import: base64
- import: re
- import: google.appengine.ext.bulkload.transform
- import: google.appengine.ext.bulkload.bulkloader_wizard
- import: google.appengine.ext.db
- import: google.appengine.api.datastore
- import: google.appengine.api.users

transformers:
- kind: Place
  connector: csv
  connector_options:
  property_map:
    - property: __key__
      external_name: key
      export_transform: transform.key_id_or_name_as_string
    - property: address
      external_name: address
      # Type: String Stats: 6 properties of this type in this kind.
    - property: location
      external_name: location
      # Type: GeoPt Stats: 6 properties of this type in this kind.
      import_transform: google.appengine.api.datastore_types.GeoPt
    - property: name
      external_name: name
      # Type: String Stats: 6 properties of this type in this kind.
    - property: placeId
      external_name: placeId
      # Type: String Stats: 6 properties of this type in this kind
upload_data.sh:
#!/bin/sh
../Eclipse/plugins/com.google.appengine.eclipse.sdkbundle_1.9.1/appengine-java-sdk-1.9.1/bin/appcfg.sh upload_data --config_file bulkloader.yaml --url=http://localhost:8888/remote_api --filename places.csv --kind=Place -e nobody#nowhere.com
I created a folder gae and placed upload_data.sh, bulkloader.yaml and places.csv there.
After I run sudo ./upload_data.sh, I receive the message:
sudo: ./upload_data.sh: command not found
After I run sudo sh upload_data.sh, I receive the following error:
Bad argument: Expected an action: [update, request_logs, rollback, update_indexes, update_cron, update_dispatch, update_dos, update_queues, cron_info, vacuum_indexes, help, download_app, version, set_default_version, resource_limits_info, start_module_version, stop_module_version, backends list, backends rollback, backends update, backends start, backends stop, backends delete, backends configure, backends, list_versions, delete_version, debug]
usage: AppCfg [options] <action> [<app-dir>] [<argument>]
Action must be one of:
help: Print help for a specific action.
download_app: Download a previously uploaded app version.
request_logs: Write request logs in Apache common log format.
rollback: Rollback an in-progress update.
start_module_version: Start the specified module version.
stop_module_version: Stop the specified module version.
update: Create or update an app version.
update_indexes: Update application indexes.
update_cron: Update application cron jobs.
update_queues: Update application task queue definitions.
update_dispatch: Update the application dispatch configuration.
update_dos: Update application DoS protection configuration.
version: Prints version information.
set_default_version: Set the default serving version.
cron_info: Displays times for the next several runs of each cron job.
resource_limits_info: Display resource limits.
vacuum_indexes: Delete unused indexes from application.
backends list: List the currently configured backends.
backends update: Update the specified backend or all backends.
backends rollback: Roll back a previously in-progress update.
backends start: Start the specified backend.
backends stop: Stop the specified backend.
backends delete: Delete the specified backend.
backends configure: Configure the specified backend.
list_versions: List the currently uploaded versions.
delete_version: Delete the specified version.
Use 'help <action>' for a detailed description.
options:
-s SERVER, --server=SERVER
The server to connect to.
-e EMAIL, --email=EMAIL
The username to use. Will prompt if omitted.
-H HOST, --host=HOST Overrides the Host header sent with all RPCs.
-p PROXYHOST[:PORT], --proxy=PROXYHOST[:PORT]
Proxies requests through the given proxy server.
If --proxy_https is also set, only HTTP will be
proxied here, otherwise both HTTP and HTTPS will.
--proxy_https=PROXYHOST[:PORT]
Proxies HTTPS requests through the given proxy server.
--no_cookies Do not save/load access credentials to/from disk.
--sdk_root=root Overrides where the SDK is located.
--passin Always read the login password from stdin.
-A APP_ID, --application=APP_ID
Override application id from appengine-web.xml or app.yaml
-M MODULE, --module=MODULE
Override module from appengine-web.xml or app.yaml
-V VERSION, --version=VERSION
Override (major) version from appengine-web.xml or app.yaml
--oauth2 Use OAuth2 instead of password auth.
--enable_jar_splitting
Split large jar files (> 10M) into smaller fragments.
--jar_splitting_excludes=SUFFIXES
When --enable-jar-splitting is set, files that match
the list of comma separated SUFFIXES will be excluded
from all jars.
--disable_jar_jsps
Do not jar the classes generated from JSPs.
--enable_jar_classes
Jar the WEB-INF/classes content.
--delete_jsps
Delete the JSP source files after compilation.
--retain_upload_dir
Do not delete temporary (staging) directory used in
uploading.
--compile_encoding
The character encoding to use when compiling JSPs.
-n NUM_DAYS, --num_days=NUM_DAYS
Number of days worth of log data to get. The cut-off
point is midnight UTC. Use 0 to get all available
logs. Default is 1.
--severity=SEVERITY Severity of app-level log messages to get. The range
is 0 (DEBUG) through 4 (CRITICAL). If omitted, only
request logs are returned.
-a, --append Append to existing file.
-n NUM_RUNS, --num_runs=NUM_RUNS
Number of scheduled execution times to compute
-f, --force Force deletion of indexes without being prompted.
What can I do to upload that data to datastore? Thank you.
I think you are using appcfg.sh instead of appcfg.py. See:
https://developers.google.com/appengine/docs/python/tools/uploadingdata
Also, your output clearly shows why you got the Bad argument error - the actions listed by appcfg.sh do not include "upload_data", which is what your script passes as the action.
I was doing this exact thing and didn't immediately make the leap of intuition either:
Download the Python SDK, which will give you the appcfg.py tool. Just call that one in your upload_data.sh script.
The appcfg.sh program doesn't have the upload_data action, which I found weird.
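A sketch of upload_data.sh rewritten to call appcfg.py instead (this assumes the Python SDK is installed and appcfg.py is on the PATH; adjust the paths and email for your setup):

#!/bin/sh
# upload the CSV to the local dev server's remote_api endpoint via the bulkloader
appcfg.py upload_data \
  --config_file=bulkloader.yaml \
  --filename=places.csv \
  --kind=Place \
  --url=http://localhost:8888/remote_api \
  --email=nobody@nowhere.com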