changing log4j file name programmatically in osgi maven bundle not working - java

I'm developing a Maven OSGi bundle and deploying it in Karaf. In it, a piece of code should get .cfg files from karaf/etc, and I'm programmatically changing them at runtime. writeLog() is invoked within a for loop from another class, so that I can create different files and the corresponding logging should go into each of those files.
public void writeLog(int i, String hostName) {
    StringBuilder sb = new StringBuilder();
    sb.append("\n HEADER : \n");
    ....
    String str = sb.toString();
    String logfile = "/home/Dev/" + hostName + i;
    logger = LoggerFactory.getLogger("TracerLog");
    updateLog4jConfiguration(logfile);
    logger.error(str + i);
}
public void updateLog4jConfiguration(String logFile) {
    Properties props = new Properties();
    try {
        // InputStream configStream = getClass().getResourceAsStream(
        //         "/home/Temp-files/NumberGenerator/src/main/java/log4j.properties");
        InputStream configStream = new FileInputStream("etc/org.ops4j.pax.logging.cfg");
        props.load(configStream);
        System.out.println(configStream);
        configStream.close();
    } catch (IOException e) {
        System.out.println("Error: Cannot load configuration file");
    }
    props.setProperty("log4j.appender.Tracer.File", logFile);
    LogManager.resetConfiguration();
    PropertyConfigurator.configure(props);
}
I am able to see the new files created with the hostname (hostname_1, hostname_2, etc.), but logging only happens in the actual appender configured in karaf/etc, that is, log.txt.
log4j.logger.TracerLog=TRACE,Tracer
log4j.appender.Tracer=org.apache.log4j.RollingFileAppender
log4j.appender.Tracer.MaxBackupIndex=10
log4j.appender.Tracer.MaxFileSize=500KB
log4j.appender.Tracer.File=/home/Dev/log.txt
I am stuck on this error. I don't know whether it has something to do with Karaf or whether it is a problem with my code.

Why aren't you just using the ConfigurationAdmin service for this, instead of altering the file?
Just reference the ConfigurationAdmin service from the registry and take the configuration with the PID org.ops4j.pax.logging.
With this approach you will have all configuration properties available for your purpose, and your code is free to alter them. It's also possible for you to add new configuration entries. In the end, the combination of the ConfigurationAdmin service and the Felix FileInstall will even persist your changes back to the configuration file.
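A minimal sketch of that approach (assuming the ConfigurationAdmin service is obtainable from your bundle context; the appender property name is taken from your configuration above):
import java.util.Dictionary;
import java.util.Hashtable;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
import org.osgi.service.cm.Configuration;
import org.osgi.service.cm.ConfigurationAdmin;

public void updateLogFile(BundleContext context, String logFile) throws Exception {
    ServiceReference<ConfigurationAdmin> ref = context.getServiceReference(ConfigurationAdmin.class);
    ConfigurationAdmin configAdmin = context.getService(ref);
    // null location leaves the configuration unbound to a specific bundle
    Configuration config = configAdmin.getConfiguration("org.ops4j.pax.logging", null);
    Dictionary<String, Object> props = config.getProperties();
    if (props == null) {
        props = new Hashtable<>();
    }
    props.put("log4j.appender.Tracer.File", logFile);
    config.update(props); // pax-logging picks up the change; no manual file editing needed
}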
By the way, did you know that there is a shell command for working with configurations, which can also alter the configuration of the org.ops4j.pax.logging service?
Just do a:
config:list
to retrieve all configurations available
and a
config:list "(service=org.ops4j.pax.logging)"
to retrieve just this information.
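And if you want to alter the appender file from the shell as well, something along these lines should work (command names are from Karaf 3.x; the exact syntax depends on your Karaf version, and the file path here is just an example):
config:edit org.ops4j.pax.logging
config:property-set log4j.appender.Tracer.File /home/Dev/hostname_1.log
config:update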

Related

Save a variable when the server is off

In fact, I am making a Minecraft plugin, and I was wondering how some plugins (without using a DB) manage to keep information even when the server is off.
For example, if we make a rank plugin and we create a different list, or we store the players who constitute each one: when the server shuts down and restarts afterwards, the lists will become empty again (as I initialized them).
So I wanted to know if anyone had any idea how to keep this information.
If a plugin wants to save information only for itself, and it doesn't need to make it accessible another way (from a PHP website, for example), you can use the YAML format.
Create the config file:
File usersFile = new File(plugin.getDataFolder(), "user-data.yml");
if (!usersFile.exists()) { // doesn't exist yet
    usersFile.createNewFile(); // throws IOException, so handle or declare it
    // OR you can copy a file, but then the plugin should contain a default file:
    /*
    try (InputStream in = plugin.getResource("user-data.yml");
         OutputStream out = new FileOutputStream(usersFile)) {
        ByteStreams.copy(in, out);
    } catch (Exception e) {
        e.printStackTrace();
    }
    */
}
Load the file as YAML content:
YamlConfiguration config = YamlConfiguration.loadConfiguration(usersFile);
Edit the content:
config.set(playerUUID, myVar);
Save the content:
config.save(usersFile); // throws IOException, so handle or declare it
Also, I suggest making the I/O (read & write) asynchronous with the scheduler.
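A minimal sketch of that, using the Bukkit scheduler and the config and usersFile from above:
Bukkit.getScheduler().runTaskAsynchronously(plugin, () -> {
    try {
        config.save(usersFile); // file I/O runs off the main server thread
    } catch (IOException e) {
        e.printStackTrace();
    }
});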
Bonus:
If you want to make ONE config file per user, with a default config, do it like this:
File oneUsersFile = new File(plugin.getDataFolder(), playerUUID + ".yml");
if (!oneUsersFile.exists()) { // doesn't exist yet
    try (InputStream in = plugin.getResource("my-def-file.yml");
         OutputStream out = new FileOutputStream(oneUsersFile)) {
        ByteStreams.copy(in, out); // copy default to current
    } catch (Exception e) {
        e.printStackTrace();
    }
}
YamlConfiguration userConfig = YamlConfiguration.loadConfiguration(oneUsersFile);
PS: the variable plugin is the instance of your plugin, i.e. the class which extends JavaPlugin.
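For illustration (the class name here is hypothetical), that instance is typically the main class itself:
public final class MyPlugin extends JavaPlugin {
    @Override
    public void onEnable() {
        // "this" is the plugin instance the snippets above refer to
        File usersFile = new File(this.getDataFolder(), "user-data.yml");
    }
}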
You can use PersistentDataContainers:
To read data from a player, use
PersistentDataContainer p = player.getPersistentDataContainer();
int blocksBroken = p.get(new NamespacedKey(plugin, "blocks_broken"), PersistentDataType.INTEGER); // You can also use DOUBLE, STRING, etc.
The NamespacedKey refers to the name of, or pointer to, the data being stored. The PersistentDataType refers to the type of data being stored, which can be any Java primitive type or String. To write data to a player, use
p.set(new NamespacedKey(plugin, "blocks_broken"), PersistentDataType.INTEGER, blocksBroken + 1);
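Note that get() returns null when the key has not been set yet, so unboxing it straight into an int can throw a NullPointerException. A safer read (sketch, same key as above) looks like this:
Integer stored = p.get(new NamespacedKey(plugin, "blocks_broken"), PersistentDataType.INTEGER);
int blocksBroken = (stored == null) ? 0 : stored;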

Hive UDF in Java fails when creating a table

What is the difference between those two queries:
SELECT my_fun(col_name) FROM my_table;
and
CREATE TABLE new_table AS SELECT my_fun(col_name) FROM my_table;
Where my_fun is a Java UDF.
I'm asking because when I create a new table (the second query) I receive a Java error.
Failure while running task:java.lang.RuntimeException: java.lang.RuntimeException: Map operator initialization failed
...
Caused by: org.apache.hadoop.hive.ql.exec.UDFArgumentException: Unable to instantiate UDF implementation class com.company_name.examples.ExampleUDF: java.lang.NullPointerException
I found that the source of the error is this line in my Java file:
encoded = Files.readAllBytes(Paths.get(configPath));
But the question is: why does it work when the table is not created, and fail when the table is created?
The problem might be with the way you read the file. Try to pass the file path as the second argument of the UDF, then read it as follows:
private BufferedReader getReaderFor(String filePath) throws HiveException {
    try {
        Path fullFilePath = FileSystems.getDefault().getPath(filePath);
        Path fileName = fullFilePath.getFileName();
        if (Files.exists(fileName)) {
            return Files.newBufferedReader(fileName, Charset.defaultCharset());
        } else if (Files.exists(fullFilePath)) {
            return Files.newBufferedReader(fullFilePath, Charset.defaultCharset());
        } else {
            throw new HiveException("Could not find \"" + fileName + "\" or \"" + fullFilePath + "\" in intersect_file() UDF.");
        }
    } catch (IOException exception) {
        throw new HiveException(exception);
    }
}
private void loadFromFile(String filePath) throws HiveException {
    set = new HashSet<String>();
    try (BufferedReader reader = getReaderFor(filePath)) {
        String line;
        while ((line = reader.readLine()) != null) {
            set.add(line);
        }
    } catch (IOException e) {
        throw new HiveException(e);
    }
}
The full code for a different generic UDF that utilizes a file reader can be found here.
I think there are several points that are unclear, so this answer is based on assumptions.
First of all, it is important to understand that Hive currently optimizes several simple queries, and depending on the size of your data, the query that works for you, SELECT my_fun(col_name) FROM my_table;, is most likely running locally on the client where you are executing the job. That is why your UDF can access your config file, which is locally available; this "execution mode" is due to the size of your data. CTAS triggers a job independent of the input data; this job runs distributed across the cluster, where each worker fails to access your config file.
It looks like you are trying to read your configuration file from the local file system, not from HDFS (Files.readAllBytes(Paths.get(configPath))). This means that your configuration has to either be replicated on all the worker nodes or be added beforehand to the distributed cache (you can use ADD FILE for this, doc here). You can find other questions here about accessing files from the distributed cache from UDFs.
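For example, the distributed-cache route looks roughly like this from the Hive shell, combined with the suggestion above of passing the file path as a second UDF argument (file name and signature are illustrative; files added with ADD FILE are localized into each task's working directory, so the bare file name resolves):
ADD FILE /local/path/my_config.txt;
SELECT my_fun(col_name, 'my_config.txt') FROM my_table;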
One additional problem is that you are passing the location of your config file through an environment variable, which is not propagated to the worker nodes as part of your Hive job. You should pass this configuration as a Hive config; there is an answer about accessing the Hive config from a UDF here, assuming that you are extending GenericUDF.
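A sketch of that last idea, assuming you extend GenericUDF; the property name my.udf.config.path is an arbitrary choice here (set it before the query with SET my.udf.config.path=/path/to/config):
private String configPath;

@Override
public void configure(MapredContext context) {
    // called on each worker when the task starts; the JobConf carries the hive/job properties
    configPath = context.getJobConf().get("my.udf.config.path");
}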

best way to store data in Java like pickle

Basically, I just want to save two integers into a file, so that I can reuse them the next time the program starts. I'd like to do it like pickle in Python, because writing them into a plain txt file is cumbersome. I have read some articles and other questions where they say I should use Java serialization or XML or JSON, but I'm not sure whether that is the right thing in my case. I'd like to use the easiest way.
Thank you very much in advance for trying to solve my problem! <3
You could use serialization, XML or JSON (usually with additional libraries). An easy way to store configuration data in files is to use Java property files, which are supported by the JRE without any additional dependencies. Property files are text files with a simple key=value syntax, see below. To write two values to a property file you can do:
String prop1 = "foo";
String prop2 = "bar";
try (OutputStream output = new FileOutputStream("config.properties")) {
Properties prop = new Properties();
// set the properties value
prop.setProperty("prop1", prop1);
prop.setProperty("prop2", prop2);
// save properties to project root folder
prop.store(output, "my app's config file");
} catch (IOException io) {
io.printStackTrace();
// TODO: improve error handling
}
which should give you something like
#my app's config file
#Sat Feb 29 12:29:27 CET 2020
prop2=bar
prop1=foo
And to load it:
try (InputStream input = new FileInputStream("config.properties")) {
    Properties prop = new Properties();
    // load the properties file
    prop.load(input);
    // get the property values and print them out
    String prop1 = prop.getProperty("prop1");
    String prop2 = prop.getProperty("prop2");
    System.out.println("prop1 = " + prop1);
    System.out.println("prop2 = " + prop2);
} catch (IOException ex) {
    ex.printStackTrace();
    // TODO: improve error handling
}
For integer values you would need some type conversion, e.g. the first two lines would be
String prop1 = Integer.toString(23);
String prop2 = Integer.toString(42);
and reading the properties then becomes
int prop1 = Integer.parseInt(prop.getProperty("prop1"));
int prop2 = Integer.parseInt(prop.getProperty("prop2"));
This solution does not scale well if the number of properties increases or the values change frequently. For a more generic procedure, see this post: Get int, float, boolean and string from Properties
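If you really want something closer to pickle for just two integers, a minimal sketch using Java's built-in serialization would be (the file name data.bin is arbitrary):
// write
try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("data.bin"))) {
    out.writeObject(new int[] { 23, 42 });
} catch (IOException e) {
    e.printStackTrace();
}
// read
try (ObjectInputStream in = new ObjectInputStream(new FileInputStream("data.bin"))) {
    int[] values = (int[]) in.readObject();
    System.out.println(values[0] + ", " + values[1]);
} catch (IOException | ClassNotFoundException e) {
    e.printStackTrace();
}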

EC2 UserData not working with Java SDK

I am trying to send some user data while spawning a new instance, but unfortunately it is not working. The code is below.
For debugging purposes I have just used an echo statement, but I cannot find any newly generated file on the machine. I also checked for the cloud-init logs in the /var/log folder, but none are present.
Can anyone help me to figure out a way to debug this problem or is there something crucial that I am missing?
I am using c4.8xlarge instances, for reference.
public static String getUserData() {
    String userData = "";
    userData = userData + "#!/bin/bash" + "\n";
    userData += "echo hello > hello" + "\n";
    String base64UserData = null;
    try {
        base64UserData = new String(Base64.encodeBase64(userData.getBytes("UTF-8")), "UTF-8");
    } catch (UnsupportedEncodingException e) {
        e.printStackTrace();
    }
    return base64UserData;
}
RunInstancesRequest runInstancesRequest = new RunInstancesRequest();
runInstancesRequest.setImageId(AMI_ID);
runInstancesRequest.setEbsOptimized(true);
runInstancesRequest.setInstanceType(INSTANCE_TYPE);
runInstancesRequest.setMinCount(1);
runInstancesRequest.setMaxCount(1);
runInstancesRequest.withSecurityGroups("JavaSecurityGroup1");
runInstancesRequest.withUserData(getUserData());
List<BlockDeviceMapping> map = new ArrayList<>();
map.add(new BlockDeviceMapping()
        .withEbs(new EbsBlockDevice()
                .withSnapshotId("snap-af8s67ef")
                .withIops(9000)
                .withVolumeSize(300)
                .withVolumeType("io1"))
        .withDeviceName("/dev/sdf"));
runInstancesRequest.withBlockDeviceMappings(map);
RunInstancesResult runInsRes = ec2.runInstances(runInstancesRequest);
Thanks!
After creating an instance, go into the EC2 web console and view the user data on the instance. You should be able to view it as plain text in the web console. If it isn't there, or if it contains something other than the two lines you are trying to set as the user data, then you will know there is an issue with the way you are setting the user data.
If it is there and looks correct, then you will need to look into the cloud-init service log /var/log/cloud-init-output.log on the created instance to see what the error is.
Edit: I just noticed you said the cloud-init logs are not present on the machine. What OS are you using for this server? It may not have the cloud-init service.
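If you'd rather check from code than from the console, something along these lines (AWS SDK for Java v1, the same ec2 client as in the question; the instance id is hypothetical) returns the user data an instance was launched with:
DescribeInstanceAttributeRequest attrReq = new DescribeInstanceAttributeRequest()
        .withInstanceId("i-0123456789abcdef0") // hypothetical instance id
        .withAttribute("userData");
String encodedUserData = ec2.describeInstanceAttribute(attrReq).getInstanceAttribute().getUserData();
// the attribute comes back base64-encoded, like it was sent
System.out.println(new String(Base64.decodeBase64(encodedUserData), StandardCharsets.UTF_8));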

BIRT Error : Unable to determine the default workspace location in Java

I get the following error
java.lang.IllegalStateException: Unable to determine the default workspace location. Check your OSGi-less platform configuration of the plugin or datatools workspace path.
which makes little sense.
Reports are created using the BIRT designer within Eclipse, and we are using code to convert the reports into PDF.
The code looks something like this:
final EngineConfig config = new EngineConfig();
config.setBIRTHome("./birt");
Platform.startup(config);
final IReportEngineFactory factory = (IReportEngineFactory) Platform
        .createFactoryObject(IReportEngineFactory.EXTENSION_REPORT_ENGINE_FACTORY);
final HTMLRenderOption ho = new HTMLRenderOption();
ho.setImageHandler(new HTMLCompleteImageHandler());
config.setEmitterConfiguration(RenderOption.OUTPUT_FORMAT_HTML, ho);
// Create the engine.
this.engine = factory.createReportEngine(config);
final IReportRunnable report = this.engine.openReportDesign(reportName);
final IRunAndRenderTask task = this.engine.createRunAndRenderTask(report);
final RenderOption options = new HTMLRenderOption();
options.setOutputFormat(HTMLRenderOption.OUTPUT_FORMAT_PDF);
final String output = reportName.replaceFirst(".rptdesign", "." + HTMLRenderOption.OUTPUT_FORMAT_PDF);
options.setOutputFileName(output);
task.setRenderOption(options);
// Run the report.
task.run();
but it seems that during the task.run() method the system throws the error.
This needs to be able to run standalone, without Eclipse. I had hoped that setting BIRT home would make it happy, but there seems to be some other connection profile that I am unaware of and probably don't need.
The full error :
07-Jan-2013 14:55:31 org.eclipse.datatools.connectivity.internal.ConnectivityPlugin log
SEVERE: Unable to determine the default workspace location. Check your OSGi-less platform configuration of the plugin or datatools workspace path.
07-Jan-2013 14:55:31 org.eclipse.birt.report.engine.api.impl.EngineTask handleFatalExceptions
SEVERE: An error happened while running the report. Cause:
java.lang.IllegalStateException: Unable to determine the default workspace location. Check your OSGi-less platform configuration of the plugin or datatools workspace path.
at org.eclipse.datatools.connectivity.internal.ConnectivityPlugin.getDefaultStateLocation(ConnectivityPlugin.java:155)
at org.eclipse.datatools.connectivity.internal.ConnectivityPlugin.getStorageLocation(ConnectivityPlugin.java:191)
at org.eclipse.datatools.connectivity.internal.ConnectionProfileMgmt.getStorageLocation(ConnectionProfileMgmt.java:1060)
at org.eclipse.datatools.connectivity.oda.profile.internal.OdaProfileFactory.defaultProfileStoreFile(OdaProfileFactory.java:170)
at org.eclipse.datatools.connectivity.oda.profile.OdaProfileExplorer.defaultProfileStoreFile(OdaProfileExplorer.java:138)
at org.eclipse.datatools.connectivity.oda.profile.OdaProfileExplorer.loadProfiles(OdaProfileExplorer.java:292)
at org.eclipse.datatools.connectivity.oda.profile.OdaProfileExplorer.getProfileByName(OdaProfileExplorer.java:537)
at org.eclipse.datatools.connectivity.oda.profile.provider.ProfilePropertyProviderImpl.getConnectionProfileImpl(ProfilePropertyProviderImpl.java:184)
at org.eclipse.datatools.connectivity.oda.profile.provider.ProfilePropertyProviderImpl.getDataSourceProperties(ProfilePropertyProviderImpl.java:64)
at org.eclipse.datatools.connectivity.oda.consumer.helper.ConnectionPropertyHandler.getEffectiveProperties(ConnectionPropertyHandler.java:123)
at org.eclipse.datatools.connectivity.oda.consumer.helper.OdaConnection.getEffectiveProperties(OdaConnection.java:826)
at org.eclipse.datatools.connectivity.oda.consumer.helper.OdaConnection.open(OdaConnection.java:240)
at org.eclipse.birt.data.engine.odaconsumer.ConnectionManager.openConnection(ConnectionManager.java:165)
at org.eclipse.birt.data.engine.executor.DataSource.newConnection(DataSource.java:224)
at org.eclipse.birt.data.engine.executor.DataSource.open(DataSource.java:212)
at org.eclipse.birt.data.engine.impl.DataSourceRuntime.openOdiDataSource(DataSourceRuntime.java:217)
at org.eclipse.birt.data.engine.impl.QueryExecutor.openDataSource(QueryExecutor.java:407)
at org.eclipse.birt.data.engine.impl.QueryExecutor.prepareExecution(QueryExecutor.java:317)
at org.eclipse.birt.data.engine.impl.PreparedQuery.doPrepare(PreparedQuery.java:455)
at org.eclipse.birt.data.engine.impl.PreparedDataSourceQuery.produceQueryResults(PreparedDataSourceQuery.java:190)
at org.eclipse.birt.data.engine.impl.PreparedDataSourceQuery.execute(PreparedDataSourceQuery.java:178)
at org.eclipse.birt.data.engine.impl.PreparedOdaDSQuery.execute(PreparedOdaDSQuery.java:145)
at org.eclipse.birt.report.data.adapter.impl.DataRequestSessionImpl.execute(DataRequestSessionImpl.java:624)
at org.eclipse.birt.report.engine.data.dte.DteDataEngine.doExecuteQuery(DteDataEngine.java:152)
at org.eclipse.birt.report.engine.data.dte.AbstractDataEngine.execute(AbstractDataEngine.java:267)
at org.eclipse.birt.report.engine.executor.ExecutionContext.executeQuery(ExecutionContext.java:1939)
at org.eclipse.birt.report.engine.executor.QueryItemExecutor.executeQuery(QueryItemExecutor.java:80)
at org.eclipse.birt.report.engine.executor.TableItemExecutor.execute(TableItemExecutor.java:62)
at org.eclipse.birt.report.engine.internal.executor.dup.SuppressDuplicateItemExecutor.execute(SuppressDuplicateItemExecutor.java:43)
at org.eclipse.birt.report.engine.internal.executor.wrap.WrappedReportItemExecutor.execute(WrappedReportItemExecutor.java:46)
at org.eclipse.birt.report.engine.internal.executor.l18n.LocalizedReportItemExecutor.execute(LocalizedReportItemExecutor.java:34)
at org.eclipse.birt.report.engine.layout.html.HTMLBlockStackingLM.layoutNodes(HTMLBlockStackingLM.java:65)
at org.eclipse.birt.report.engine.layout.html.HTMLPageLM.layout(HTMLPageLM.java:92)
at org.eclipse.birt.report.engine.layout.html.HTMLReportLayoutEngine.layout(HTMLReportLayoutEngine.java:100)
at org.eclipse.birt.report.engine.api.impl.RunAndRenderTask.doRun(RunAndRenderTask.java:180)
at org.eclipse.birt.report.engine.api.impl.RunAndRenderTask.run(RunAndRenderTask.java:77)
Has anyone seen this error, and can you point me in the right direction?
When I had this issue, I tried two things. The first thing solved the error, but then I just got to the next error.
The first thing I tried was adding the following line to the setenv.sh file:
export CATALINA_OPTS="$CATALINA_OPTS -Djava.io.tmpdir=/opt/local/share/tomcat/apache-tomcat-8.0.8/temp/tmpdir -Dorg.eclipse.datatools_workspacepath=/opt/local/share/tomcat/apache-tomcat-8.0.8/temp/tmpdir/workspace_dtp"
This solution worked after I created the tmpdir and workspace_dtp directories in my local Tomcat server. This was done in response to the guidance here.
However, I just got to the next error, which was a connection-profile error. I can look into it again if you need; I know how to replicate the issue.
The second thing I tried ended up solving the issue completely, and it had to do with our report designer selecting the wrong type of data source in the report design process. See my post on the Eclipse BIRT forums here for the full story.
Basically, the report type was set to "JDBC Database Connection for Query Builder" when it should have been set to "JDBC Data Source".
Here is a tip that saved me from that pain:
Just launch Eclipse with the "-clean" option after installing the BIRT plugins.
To be clear, my project was built from BIRT Maven dependencies and so should not use Eclipse dependencies to run (except for designing reports), but... I think there was a conflict somewhere, especially with org.eclipse.datatools.connectivity_1.2.4.v201202041105.jar.
For a global understanding, you should follow the migration guide:
http://wiki.eclipse.org/Birt_3.7_Migration_Guide#Connection_Profiles
It explains how to use a connection profile to externalize datasource parameters.
That is not required if you define the JDBC parameters directly in the report design.
I used this programmatic way to initialize the workspace directory:
@Override
public void initializeEngine() throws BirtException {
    // define eclipse datatools workspace path (required)
    String workspacePath = setDataToolsWorkspacePath();
    // set configuration
    final EngineConfig config = new EngineConfig();
    config.setLogConfig(workspacePath, Level.WARNING);
    // config.setResourcePath(getSqlDriverClassJarPath());
    // startup OSGi framework
    Platform.startup(config); // really needed?
    IReportEngineFactory factory = (IReportEngineFactory) Platform
            .createFactoryObject(IReportEngineFactory.EXTENSION_REPORT_ENGINE_FACTORY);
    engine = factory.createReportEngine(config);
    engine.changeLogLevel(Level.WARNING);
}
private String setDataToolsWorkspacePath() {
    // DATATOOLS_WORKSPACE_PATH = "org.eclipse.datatools_workspacepath" (the system property shown above)
    String workspacePath = System.getProperty(DATATOOLS_WORKSPACE_PATH);
    if (workspacePath == null) {
        workspacePath = FilenameUtils.concat(SystemUtils.getJavaIoTmpDir().getAbsolutePath(), "workspace_dtp");
        File workspaceDir = new File(workspacePath);
        if (!workspaceDir.exists()) {
            workspaceDir.mkdir();
        }
        if (!workspaceDir.canWrite()) {
            workspaceDir.setWritable(true);
        }
        System.setProperty(DATATOOLS_WORKSPACE_PATH, workspacePath);
    }
    return workspacePath;
}
I also needed to force the datasource parameters at runtime, this way:
private void generateReportOutput(InputStream reportDesignInStream, File outputFile, OUTPUT_FORMAT outputFormat,
        Map<PARAM, Object> params) throws EngineException, SemanticException {
    // Open a report design
    IReportRunnable design = engine.openReportDesign(reportDesignInStream);
    // Use data-source properties from persistence.xml
    forceDataSource(design);
    // Create RunAndRender task
    IRunAndRenderTask runTask = engine.createRunAndRenderTask(design);
    // Use data-source from JPA persistence context
    // forceDataSourceConnection(runTask);
    // Define report parameters
    defineReportParameters(runTask, params);
    // Set render options
    runTask.setRenderOption(getRenderOptions(outputFile, outputFormat, params));
    // Execute task
    runTask.run();
}
private void forceDataSource(IReportRunnable runableReport) throws SemanticException {
    DesignElementHandle designHandle = runableReport.getDesignHandle();
    Map<String, String> persistenceProperties = PersistenceUtils.getPersistenceProperties();
    String dsURL = persistenceProperties.get(AvailableSettings.JDBC_URL);
    String dsDatabase = StringUtils.substringAfterLast(dsURL, "/");
    String dsUser = persistenceProperties.get(AvailableSettings.JDBC_USER);
    String dsPass = persistenceProperties.get(AvailableSettings.JDBC_PASSWORD);
    String dsDriver = persistenceProperties.get(AvailableSettings.JDBC_DRIVER);
    SlotHandle dataSources = ((ReportDesignHandle) designHandle).getDataSources();
    int count = dataSources.getCount();
    for (int i = 0; i < count; i++) {
        DesignElementHandle dsHandle = dataSources.get(i);
        if (dsHandle != null && dsHandle instanceof OdaDataSourceHandle) {
            // replace connection properties from persistence.xml
            dsHandle.setProperty("databaseName", dsDatabase);
            dsHandle.setProperty("username", dsUser);
            dsHandle.setProperty("password", dsPass);
            dsHandle.setProperty("URL", dsURL);
            dsHandle.setProperty("driverClass", dsDriver);
            dsHandle.setProperty("jarList", getSqlDriverClassJarPath());
            // @SuppressWarnings("unchecked")
            // List<ExtendedProperty> privateProperties = (List<ExtendedProperty>) dsHandle
            //         .getProperty("privateDriverProperties");
            // for (ExtendedProperty extProp : privateProperties) {
            //     if ("odaUser".equals(extProp.getName())) {
            //         extProp.setValue(dsUser);
            //     }
            // }
        }
    }
}
I was having the same issue.
Changing the data source type from "JDBC Database Connection for Query Builder" to "JDBC Data Source" solved the problem for me.
