I've been working on a plugin that requires storing a fair amount of data.
I store it in a custom config file, based on a template I found online that works basically the same as the default config.
The problem I'm having is that I'm not sure how to actually close the file, or whether I even need to, as I know little about YAML configurations.
The code for the template I used is below.
I'd also welcome advice on how to store larger amounts of data in the future.
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.logging.Level;
import org.bukkit.configuration.file.FileConfiguration;
import org.bukkit.configuration.file.YamlConfiguration;
import org.bukkit.plugin.java.JavaPlugin;
public class CustomConfig {
//store name of file to load/edit
private final String fileName;
//store plugin, to get file directory
private final JavaPlugin plugin;
//store actual hard disk file location
private File configFile;
//store the in-memory copy of the config
private FileConfiguration fileConfiguration;
//constructor taking a plugin and filename
public CustomConfig(JavaPlugin plugin, String fileName) {
//ensure plugin exists to get folder path
if (plugin == null)
throw new IllegalArgumentException("plugin cannot be null");
//set this class's plugin field to the one passed to the constructor
this.plugin = plugin;
//get name of file to load/edit
this.fileName = fileName;
//get directory/folder of file to load/edit
File dataFolder = plugin.getDataFolder();
if (dataFolder == null)
throw new IllegalStateException();
//create the config file reference inside the plugin's data folder
this.configFile = new File(dataFolder, fileName);
reloadConfig();
}
public void reloadConfig() {
//load the in-memory copy from the file on disk
fileConfiguration = YamlConfiguration.loadConfiguration(configFile);
//look for a default config bundled in the jar and use it for missing keys
InputStream defConfigStream = plugin.getResource(fileName);
if (defConfigStream != null) {
YamlConfiguration defConfig = YamlConfiguration.loadConfiguration(new InputStreamReader(defConfigStream));
fileConfiguration.setDefaults(defConfig);
}
}
public FileConfiguration getConfig() {
if (fileConfiguration == null) {
this.reloadConfig();
}
return fileConfiguration;
}
public void saveConfig() {
if (fileConfiguration == null || configFile == null) {
return;
} else {
try {
getConfig().save(configFile);
} catch (IOException ex) {
plugin.getLogger().log(Level.SEVERE, "Could not save config to " + configFile, ex);
}
}
}
public void saveDefaultConfig() {
if (!configFile.exists()) {
this.plugin.saveResource(fileName, false);
}
}
}
No. You do not have to close YamlConfiguration objects.
While the default config (JavaPlugin.getConfig()) is bound to the lifecycle of the plugin, custom ones are disposed of like any other Java object, i.e. when the garbage collector determines that there are no more references pointing to them in the code.
You don't need to close the config; it's not a BufferedWriter. The config keeps all of its data in memory until the server shuts down. This means that if something changes in the config file while your plugin is enabled, you will need to use your reloadConfig() method to pick it up. The only cleanup you need to do after using FileConfiguration#set(String, Object) is to call your saveConfig() method (which uses FileConfiguration#save(File)) to tell Bukkit to take the current state of your config and copy it into your config file.
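For illustration, here is a minimal sketch of that set/save cycle using the CustomConfig class from the question (the file name "stats.yml" and the key "players.count" are made up for the example):
// somewhere inside your JavaPlugin subclass
CustomConfig stats = new CustomConfig(this, "stats.yml");
stats.saveDefaultConfig(); // copy the default from the jar if no file exists yet
stats.getConfig().set("players.count", 42); // change the in-memory copy
stats.saveConfig(); // write the in-memory copy back to disk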
Related
When I build the project with Maven it's OK, but when I deploy it to Tomcat I get a NullPointerException.
The class where the problem may be is PropertiesManager.
Log line: PropertiesManager.getApplicationProperties(PropertiesManager.java:31)
public class PropertiesManager {
private static final String PROPERTY_FILE_NAME =
"resources/application.properties";
private static PropertiesManager instance;
private Properties properties;
private PropertiesManager() {
}
public static PropertiesManager getInstance() {
if (instance == null) {
instance = new PropertiesManager();
}
return instance;
}
public Properties getApplicationProperties() {
if (properties == null) {
properties = new Properties();
try (InputStream stream = Thread.currentThread()
.getContextClassLoader()
.getResourceAsStream(PROPERTY_FILE_NAME)) {
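// note: getResourceAsStream returns null when the resource is not found,
// and properties.load(null) then throws a NullPointerException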
properties.load(stream);
} catch (IOException e) {
throw new ApplicationException("Failed to load property file", e);
}
}
return properties;
}
}
And logline: ApplicationLifecycleListener.contextInitialized(ApplicationLifecycleListener.java:14)
Class ApplicationLifecycleListener:
public class ApplicationLifecycleListener implements ServletContextListener {
@Override
public void contextInitialized(ServletContextEvent sce) {
Properties applicationProperties = PropertiesManager.getInstance().getApplicationProperties();
DBManager.getInstance().initialize(applicationProperties);
}
@Override
public void contextDestroyed(ServletContextEvent sce) {
DBManager.getInstance().stopDb();
}
}
What could the problem be?
Without the file containing the exact line where you see the NullPointerException (none of the files you provided have the lines shown in the log), it is difficult to be sure. But one hint: although you put your resource files in the '<project>/src/main/resources' folder for Maven to build, when the war file is built and packed, your application resource files end up in the 'WEB-INF/classes' folder, which is part of the application's default classpath. Therefore, to reference them correctly using Thread.currentThread().getContextClassLoader().getResourceAsStream(...), you should not add the 'resources/...' prefix to the file name, since this method already looks for files on the default application classpath. Remove the prefix and see if it works. Please refer to this answer for more detail.
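Concretely, assuming the file ends up at WEB-INF/classes/application.properties in the war, the constant would become:
// "resources/" is a source-folder name in the project, not part of the runtime classpath
private static final String PROPERTY_FILE_NAME = "application.properties";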
I have one method which writes to a file. I need to synchronize on the file object.
class MessageFile{
public static final String fileName = "Main.html";
@Autowired
AppConfig appConfig;
public boolean writeToFile(String fileContent) throws Exception{
String path = appConfig.getNewsPath() + File.separator + fileName; // getNewsPath is non-static method
final File alertFile= new File(path);
FileOutputStream out = null;
synchronized (alertFile) {
if (!alertFile.exists()) {
alertFile.createNewFile();
}
try {
out = new FileOutputStream(alertFile, false);
out.write(fileContent.getBytes());
out.flush();
} finally {
if (out != null) {
out.close();
}
}
}
return true;
}
}
But the above code won't take an exclusive lock on the file object, since each call creates its own File object, so another instance of this class takes its own lock and can write to the file at the same time.
So how should I handle this case?
I found one workaround: create a temporary file with a timestamp appended to its name (so the temporary file name is always unique), write the content to it, then delete the original file and rename the temporary file to the original file name.
You can try synchronizing on MessageFile.class, if that class is the only code accessing the file.
Your program does not get an exclusive lock on the file because you are synchronizing on a local variable, alertFile, that is not shared between instances of the class MessageFile (each object has its own alertFile). You have two possibilities to solve this, as sketched below:
1- Create some static object and synchronize on it (you may use fileName as it is already there).
2- Have a reference in every object that points to one shared object (passed in the constructor, for example) and synchronize on it.
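A minimal sketch of the first option, with a dedicated static lock object (the field name FILE_LOCK is made up for the example):
class MessageFile {
// one lock shared by every instance of MessageFile
private static final Object FILE_LOCK = new Object();
public boolean writeToFile(String fileContent) throws Exception {
synchronized (FILE_LOCK) {
// ... the existing write logic from the question ...
}
return true;
}
}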
You are creating a new File object (alertFile) every time the method is run, so the lock does nothing, as it is a different object on every call - you need a static File instance shared across all method calls.
If the path can be different every time the method is run, you could create a static Map<String, File> instance and use it like this:
Get path of the file.
If there is no File associated with this path, create it.
Otherwise, recover existing File instance from map.
Use this File as a lock and do operations on it.
Example based on modified answer:
class MessageFile{
public static final String fileName = "Main.html";
@Autowired
AppConfig appConfig;
private static final Map<String, File> filesMap = new HashMap<>();
public boolean writeToFile(String fileContent) throws Exception{
String path = appConfig.getNewsPath() + File.separator + fileName; // getNewsPath is non-static method
final File alertFile;
synchronized(filesMap) {
if (filesMap.containsKey(path)) {
alertFile = filesMap.get(path);
}
else {
alertFile = new File(path);
filesMap.put(path, alertFile);
}
}
FileOutputStream out = null;
synchronized (alertFile) {
if (!alertFile.exists()) {
alertFile.createNewFile();
}
try {
out = new FileOutputStream(alertFile, false);
out.write(fileContent.getBytes());
out.flush();
} finally {
if (out != null) {
out.close();
}
}
}
return true;
}
}
Synchronize on the class-level object, i.e. MessageFile.class, or make writeToFile() a static synchronized method. That will make sure only one thread writes to the file at a time, and it guarantees the lock is released once a thread has written all of its data to the file.
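For instance, keeping the method an instance method (so it can still use the autowired appConfig) but locking on the class object:
public boolean writeToFile(String fileContent) throws Exception {
synchronized (MessageFile.class) {
// ... the existing write logic from the question ...
}
return true;
}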
I want to run an action (with a rule) when a file enters a folder in my Alfresco repository. The file needs to be moved to a new folder, which will be named after the metadata property "subject" of the uploaded file.
I am not able to figure out how to do this. Does anyone have any tips?
(A repository webscript is also an option.)
This is how I see it:
import java.util.List;
public class MoveExecuter extends ActionExecuterAbstractBase {
public static final String DESTINATION_FOLDER = "destination-folder";
private FileFolderService fileFolderService;
private NodeService nodeService;
@Override
protected void addParameterDefinitions(List<ParameterDefinition> paramList) {
paramList.add(
new ParameterDefinitionImpl(DESTINATION_FOLDER,
DataTypeDefinition.NODE_REF,
true,
getParamDisplayLabel("METADATA VALUE FROM FIELD SUBJECT FROM INCOMING FILE")));
}
public void executeImpl(Action ruleAction, NodeRef actionedUponNodeRef) {
NodeRef destinationParent = (NodeRef) ruleAction.getParameterValue(DESTINATION_FOLDER);
// if the destination folder exists, move the incoming file into it
if (this.nodeService.exists(destinationParent)) {
try {
fileFolderService.move(actionedUponNodeRef, destinationParent, null);
} catch (FileNotFoundException e) {
// Do nothing
}
} else {
// otherwise create it first; parentRef, assocTypeQName and assocQName are still
// placeholders, and the new folder's name should come from the "subject" property
try {
nodeService.createNode(parentRef, assocTypeQName, assocQName, "metadata field subject");
fileFolderService.move(actionedUponNodeRef, destinationParent, null);
} catch (FileNotFoundException e) {
// Do nothing
}
}
}
}
For such a simple action I'd just use JavaScript instead of a Java action.
Install the JavaScript add-on from Google Code or GitHub (newer version).
Then just write your JavaScript code according to the API and run it at runtime in the console to test your code.
I need to read a properties file in a glassfish 4 application. The file needs to be somewhere in the application (i.e. not at some random place in the file system).
If it matters, I'm developing with Eclipse, the project builds with Maven, and the artifact is a war.
It seems to me there are three things I need to know to make this work.
1) Where does the original file need to be?
2) Where does the file need to end up?
3) How do I read it?
So far I created the file:
src/main/resources/version.properties
which ends up in
WEB-INF/classes/version.properties
I don't know if that is the correct location.
Based on similar questions, I have defined a ServletContextListener:
public class ServletContextClass implements ServletContextListener {
...
@Override
public void contextInitialized(ServletContextEvent arg0) {
ServletContext ctx = arg0.getServletContext();
InputStream istream = ctx.getResourceAsStream("version.properties");
// at this point, istream is null
Properties p = new Properties();
p.load(istream);
}
}
I'm not sure if I have the file in the wrong place, if I'm reading it wrong, or both.
update: the following "works":
@Override
public void contextInitialized(ServletContextEvent arg0) {
ResourceBundle bundle = ResourceBundle.getBundle("version");
if (bundle == null) {
logger.info("bundle is null");
} else {
logger.info("bundle is not null");
logger.info("version: " + bundle.getString("myversion"));
}
}
However, I don't think this is the correct solution. Bundles are for locale support, and this does not fall under that category.
Update 2: I corrected the location where the file ends up.
1) Putting the version.properties file in
src/main/resources/version.properties
seems to be correct.
2) In the target war, the file does in fact end up in
WEB-INF/classes/version.properties
3) To read the file: I already had a ServletContextListener defined. If you don't, you need to define one and configure it in web.xml. Here is a portion of my ServletContextListener:
package com.mycompany.service;
public class ServletContextClass implements ServletContextListener {
@Override
public void contextInitialized(ServletContextEvent arg0) {
ServletContext ctx = arg0.getServletContext();
try (InputStream istream = ctx.getResourceAsStream("/WEB-INF/classes/version.properties")) {
Properties p = new Properties();
p.load(istream);
Properties sysProps = System.getProperties();
sysProps.putAll(p);
} catch (IOException e) {
logger.error("Error reading version.properties");
}
}
}
It is configured with this piece of the web.xml:
<listener>
<listener-class>com.mycompany.service.ServletContextClass</listener-class>
</listener>
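As an aside (not part of the original answer): because WEB-INF/classes is on the webapp's classpath, the same file can also be read through the context classloader without hard-coding the WEB-INF path:
Properties p = new Properties();
// the classloader resolves plain names relative to WEB-INF/classes inside a war
try (InputStream in = Thread.currentThread().getContextClassLoader().getResourceAsStream("version.properties")) {
if (in != null) {
p.load(in);
}
}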
The following code works when I execute the Pig script locally while specifying a local GeoIPASNum.dat file. However, it does not work when run in MapReduce distributed mode. What am I missing?
Pig job
DEFINE AsnResolver AsnResolver('/hdfs/location/of/GeoIPASNum.dat');
loaded = LOAD 'log_file' Using PigStorage() AS (ip:chararray);
columned = FOREACH loaded GENERATE AsnResolver(ip);
STORE columned INTO 'output/' USING PigStorage();
AsnResolver.java
public class AsnResolver extends EvalFunc<String> {
String ipAsnFile = null;
@Override
public String exec(Tuple input) throws IOException {
try {
LookupService lus = new LookupService(ipAsnFile,
LookupService.GEOIP_MEMORY_CACHE);
return lus.getOrg((String) input.get(0));
} catch (IOException e) {
}
return null;
}
public AsnResolver(String file) {
ipAsnFile = file;
}
...
}
The problem is that you are using a string reference to an HDFS path and the LookupService constructor can't resolve the file. It probably works when you run it locally since the LookupService has no problem with a file in your local FS.
Override the getCacheFiles method:
@Override
public List<String> getCacheFiles() {
List<String> list = new ArrayList<String>(1);
list.add(ipAsnFile + "#GeoIPASNum.dat");
return list;
}
Then change your LookupService constructor call to use the distributed cache reference to "GeoIPASNum.dat":
LookupService lus = new LookupService("GeoIPASNum.dat", LookupService.GEOIP_MEMORY_CACHE);
Search for "Distributed Cache" in this page of the Pig docs: http://pig.apache.org/docs/r0.11.0/udf.html
The example it shows using the getCacheFiles() method should ensure that the file is accessible to all the nodes in the cluster.
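Putting the two changes together, the UDF might look roughly like this (a sketch with imports omitted; caching the LookupService across calls is an extra tweak the original snippet did not have):
public class AsnResolver extends EvalFunc<String> {
private String ipAsnFile = null;
private LookupService lus = null; // created lazily and reused across calls
public AsnResolver(String file) {
ipAsnFile = file; // HDFS path passed in the DEFINE statement
}
@Override
public List<String> getCacheFiles() {
// ship the HDFS file to every node under the symlink name GeoIPASNum.dat
List<String> list = new ArrayList<String>(1);
list.add(ipAsnFile + "#GeoIPASNum.dat");
return list;
}
@Override
public String exec(Tuple input) throws IOException {
if (lus == null) {
// the distributed cache symlink appears in the task's working directory
lus = new LookupService("GeoIPASNum.dat", LookupService.GEOIP_MEMORY_CACHE);
}
return lus.getOrg((String) input.get(0));
}
}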