My .htaccess file contains URL mappings, and my script recreates these entries once a day.
Because .htaccess is a hidden file on the server, when the script tries to overwrite it I get an "Access Denied" error message.
Is there a way to overwrite the file?
I suspect that the problem here is that you're on Windows, and Windows doesn't especially like filenames that begin with a dot (it thinks you're creating a file with an empty "name" and a "htaccess" extension).
The fastest solution might just be to change the name of the file that Apache looks for to e.g. htaccess.txt, using the AccessFileName directive (i.e. AccessFileName htaccess.txt in your Apache configuration).
The fact that the name starts with a . has nothing to do with the access permissions.
Check the ls -l /path/to/.htaccess output for the user:group and permissions on the file, and make sure that your script executes with sufficient privileges to write to it. That might mean running your script from the crontab(5) of your webserver user, or running chown(1) to change the owner to whoever runs your script, or using chown(1) to change the file's group to the group your script runs as and then chmod(1) to allow group writes.
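If your script happens to be Java (just an assumption; the question doesn't say), you can inspect and fix those bits from code with java.nio.file on a POSIX filesystem. A minimal sketch, with the path as a placeholder:

import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.*;
import java.util.Set;

public class HtaccessPerms {
    public static void main(String[] args) throws IOException {
        Path htaccess = Paths.get("/path/to/.htaccess");  // adjust to your docroot

        // The equivalent of checking ls -l: owner, group and permission bits.
        PosixFileAttributes attrs =
                Files.readAttributes(htaccess, PosixFileAttributes.class);
        System.out.printf("owner=%s group=%s perms=%s%n",
                attrs.owner().getName(),
                attrs.group().getName(),
                PosixFilePermissions.toString(attrs.permissions()));

        // The equivalent of chmod g+w: let group members write the file.
        Set<PosixFilePermission> perms = Files.getPosixFilePermissions(htaccess);
        perms.add(PosixFilePermission.GROUP_WRITE);
        Files.setPosixFilePermissions(htaccess, perms);
    }
}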
It depends upon what you really want to accomplish.
Try deleting the file and letting your script create it the next time it runs (or force the generation). That way the user that runs the script will be the owner of the file, so it should work from then on.
Also check if your script can create files in that directory.
The Linux API has an O_TMPFILE flag that can be specified with the open system call to create an unnamed temporary file which cannot be opened by any path. We can use this to write all the data to the file "atomically" and then linkat the file to its real path. According to the open man page, it can be implemented as simply as:
/* needs _GNU_SOURCE defined before the includes, plus <fcntl.h>, <stdio.h>, <unistd.h> */
char path[1000];
/* the file exists only as an open descriptor until the linkat below */
int fd = open("/tmp", O_TMPFILE | O_WRONLY, S_IRUSR | S_IWUSR);
write(fd, "123456", sizeof("123456") - 1);  /* -1: don't write the trailing '\0' */
snprintf(path, sizeof(path), "/proc/self/fd/%d", fd);
linkat(AT_FDCWD, path, AT_FDCWD, "/tmp/1111111", AT_SYMLINK_FOLLOW);
Is there a Java alternative (probably not cross-platform) to do an atomic write to a file, without writing a Linux-specific JNI function? Files.createTempFile does something completely different.
By atomic write I mean that the file either cannot be opened and read from at all, or it contains all the data that was to be written.
I don't believe Java has an API for this, and it seems to depend on both the OS and filesystem having support, so JNI might be the only way, and even then only on Linux.
I did a quick search for what Cygwin does; it seems to be a bit of a hack just to make software work: it creates a file with a random name and then excludes that file only from its own directory listings.
I believe the closest you can get in plain Java is to create the file in some other location (kinda like a /proc/self/fd/... equivalent), and then when you are done writing it, either move it or create a symbolic link to it from the final location. For the move, you want the file on the same filesystem partition so the contents don't actually need to be copied. Programs watching for the file in, say, /tmp/ wouldn't see it until the move or symlink creation.
You could possibly play around with user accounts and filesystem permissions to ensure that no other (non SYSTEM/root) program can see the file initially even if they tried to look wherever you hid it.
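A minimal sketch of that move-into-place approach in plain Java; the target path is borrowed from the question's C example, the rest is standard java.nio.file:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;

public class AtomicWrite {
    public static void main(String[] args) throws IOException {
        Path target = Paths.get("/tmp/1111111");  // final, visible path

        // Write to a hidden temp file on the SAME partition as the target,
        // so the move below is a rename rather than a data copy.
        Path tmp = Files.createTempFile(target.getParent(), ".atomic", ".tmp");
        Files.write(tmp, "123456".getBytes(StandardCharsets.UTF_8));

        // ATOMIC_MOVE makes the rename all-or-nothing: readers of the target
        // path see either no file at all or the complete contents.
        Files.move(tmp, target, StandardCopyOption.ATOMIC_MOVE);
    }
}

Files.move throws AtomicMoveNotSupportedException if the underlying filesystem can't do the rename atomically, which is the same OS/filesystem dependence mentioned above.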
Task: copy a folder and its contents from one VDI to another. This application is internally facing within the company.
Method:
1. In the JSP, have the user browse for a folder
2. The folder selection is in a text box; the folder path is passed into an action class
3. The folder path is placed into a Teradata table
4. A script is called to query the table for the source path and the target path (pre-determined) and make the copy
Due Diligence: So far I have tried <input type="file">, which selects a file, not a folder. Also, the file path is not passed through, for security reasons. I have read about other possible solutions but none of them work.
Question: Are servlets a viable solution, and if so, how do I create one?
I'm going to go with no. There are several reasons for this.
A Java Enterprise Edition application (be it a servlet or a JavaServer Page) is not supposed to access the file system directly.
It is inherently unsafe to expose internal infrastructure through an external website.
I think you need to break it up a bit more:
1. Save a list of shares the server has access to in a data store of some sort, like a new Teradata table or, for a quick proof of concept, a plain text file (if you're on Linux you can use the output of something like showmount -e localhost); see the sketch below.
2. Let the user pick the src share from a combobox or something similar.
3. Continue from your step 2.
This gives you two immediately obvious advantages, which may or may not be relevant.
You can use the system without having access to the physical shares.
You can add metadata (like description or aliases).
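A rough sketch of the quick proof-of-concept variant of step 1 in Java; the file name /etc/myapp/shares.txt is invented for this example:

import java.io.IOException;
import java.nio.file.*;
import java.util.List;

public class ShareList {
    // The allowed shares live in a plain text file, one path per line
    // (e.g. captured from the output of showmount -e localhost).
    static List<String> loadShares() throws IOException {
        return Files.readAllLines(Paths.get("/etc/myapp/shares.txt"));
    }

    // Accept a src path only if it is one of the known shares, so the web
    // tier never forwards an arbitrary user-typed path to the copy script.
    static boolean isAllowed(String userChoice) throws IOException {
        return loadShares().contains(userChoice);
    }
}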
I have a piece of Java code which reads a few files and keeps them loaded into memory for some time. The file handles are preserved after reading. My problem here is that I want to restrict users from deleting these files using the "DEL" key or the rm command.
I could achieve this on Windows by preserving the file handles, but on Unix rm does not honour the lock on the files. I even tried FileChannel.lock() but it did not help either.
Any suggestions are appreciated.
As long as you have the handle open, they can remove the file from the directory, but they can't delete the file itself; i.e. the file isn't actually removed until you close your handle or your process dies.
I even tried FileChannel.lock() but it did not help either.
That is because it's the directory, not the file, that is being altered: e.g. if they have write access to the file but not to the directory, they cannot delete it.
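You can watch this happen from Java on Linux; a small demonstration sketch (the path is a placeholder):

import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.*;

public class UnlinkDemo {
    public static void main(String[] args) throws IOException {
        Path p = Paths.get("/tmp/demo.txt");  // placeholder path
        Files.write(p, "hello".getBytes());

        try (BufferedReader in = Files.newBufferedReader(p)) {
            Files.delete(p);                      // same effect as rm: removes the directory entry
            System.out.println(Files.exists(p));  // false: the name is gone...
            System.out.println(in.readLine());    // ...but the open handle still reads "hello"
        }
        // Only when the last open handle closes does the kernel free the file.
    }
}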
You could also look into chattr which can be used to lock the file.
chattr +i filename
Should render the file undeletable. You can then make it deletable again via...
chattr -i filename
There is no pure Java solution to this. In fact, I don't think there is a solution at all that doesn't have potentially nasty consequences. The fundamental problem is that UNIX/Linux doesn't have a way to temporarily place a mandatory lock on a file. (The Linux syscall for locking a file is flock, but flock-style locks are discretionary: an application that doesn't bother to flock a file won't be affected by other applications' locks on it.)
The best you can do is to use chattr +i to set the "immutable" attribute on the file. Unfortunately, that has other effects:
The immutable file cannot be written to or linked to either.
If your application crashes without unsetting the attribute, the user is left with a file that he / she mysteriously cannot change or delete. Not even with sudo or su.
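If you accept those trade-offs anyway, shelling out to chattr from Java is simple enough. A sketch, assuming root (or CAP_LINUX_IMMUTABLE), a filesystem that supports the flag, and a placeholder path:

import java.io.IOException;

public class Immutable {
    // Toggle the immutable attribute by invoking the chattr command.
    static void chattr(String flag, String file) throws IOException, InterruptedException {
        new ProcessBuilder("chattr", flag, file).inheritIO().start().waitFor();
    }

    public static void main(String[] args) throws Exception {
        String file = "/tmp/protected.dat";  // placeholder path
        chattr("+i", file);
        // Best-effort cleanup on normal JVM exit; as noted above, a hard
        // crash (kill -9, power loss) will still leave the file immutable.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            try { chattr("-i", file); } catch (Exception ignored) {}
        }));
        // ... work with the protected file here ...
    }
}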
One of my website pages (written in PHP) manipulates a MySQL database before starting a .jar archive in the background, with the following command:
nohup java -jar myJar.jar > /dev/null &
This jar creates a text file in a folder (the current one or a subfolder). For my Java program to be able to write the file, I have to grant the w (write) permission to a (all users) on the www folder (or one of its subfolders).
Based on what I have read, one solution would be to give the write permission only to www-data, which is the Apache user. However, I cannot see how that is more secure than a 777 chmod, because a hacker would still have permission to write through his browser.
Do you know a solution which would :
Make my server as safe as possible.
Allow my Java program (launched by PHP) to create and modify files on the server.
Run your Java program as a daemon with its own user that has the privilege to edit that specific folder.
Set it to monitor a file or database to see if it needs to run and do its thing. Then, when your PHP script needs it, just modify the file/database.
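A minimal sketch of the monitoring side in Java, using a watched directory as the trigger (the inbox path is invented for this example):

import java.io.IOException;
import java.nio.file.*;

public class TriggerDaemon {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Directory the PHP side drops "work request" files into (placeholder path).
        Path inbox = Paths.get("/var/lib/myapp/inbox");

        WatchService watcher = FileSystems.getDefault().newWatchService();
        inbox.register(watcher, StandardWatchEventKinds.ENTRY_CREATE);

        while (true) {
            WatchKey key = watcher.take();  // blocks until something appears
            for (WatchEvent<?> event : key.pollEvents()) {
                Path created = inbox.resolve((Path) event.context());
                System.out.println("work request: " + created);
                // ... create the output files here, under this daemon's own user ...
            }
            key.reset();
        }
    }
}

That way the www folder never needs to be world-writable: only the daemon's own user can write there.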
I'm unsure of the best solution for this, but this is what I've done.
I'm using PHP to look into a directory that contains zip files.
These zip files contain text files that are to be loaded into an Oracle database through SQL*Loader (sqlldr).
I want to be able to start more than one PHP process via the command line to load these zip files into the db.
If other 'php loader' processes are running, they shouldn't overlap and try to load the same zip file. I know I could start one process and let it process each zip file but I'd rather start up a new process for incoming zip files so I can load concurrently.
Right now, I've created a class that will 'lock' a zip file, a directory, or a generic text file by creating a file called 'filename.ext.lock'. Other processes that start up will check to see whether a file has been 'locked' in this way; if it has, they will skip that file and move on to another file for processing.
I've made a class that uses a directory and creates 'process id' files so that each PHP process has an id it can use for logging purposes and for identifying which PHP process has locked the file.
I'm on a windows machine and it isn't in the plan to make this an ubuntu machine, for those of you that might suggest pcntl.
What other solutions do you see? I know that this isn't truly synchronized, because a lock file might be about to be created, then a context switch occurs, and another PHP process 'locks' the file before the first one can create the lock file.
Can you please provide me with some ideas about how I can make this solution better? A Java implementation? Erlang?
Also, I forgot to mention: the PHP processes connect to the DB to fetch metadata about the files they are going to load via SQL*Loader. I don't think that is important, but just in case.
Quick note: I'm aware that sqlldr locks the table it is loading and that if multiple processes try to load into the same table it will become a bottleneck. To alleviate this problem I plan on making a directory that contains files named after the tables that are currently being loaded. After a table has finished loading, the respective file will be deleted, and other processes will check whether it is safe to load that table.
Extra information: I'm using 7-Zip to unzip the files and PHP's exec to run these commands.
I'm using exec to call sqlldr as well.
The zip files can be huge (1 GB) and loading one table can take up to an hour.
Rather than creating a .lock file, you can just rename the zip file when a loader starts to process it, e.g. to "foobar.zip.bar"; the rename should be faster than creating a new file on disk.
But the rename alone doesn't guarantee which loader picks up the next file, so you should at least have some control over how new loaders are started, for instance in a separate script.
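Since the question explicitly asks about a Java implementation: the rename trick works as an atomic claim there too, because File.renameTo either succeeds or fails as a whole on a local filesystem, so exactly one loader wins a given zip. A sketch; the directory and the ".processing" suffix are invented:

import java.io.File;

public class ZipClaim {
    // Try to claim a zip for this loader by renaming it; whichever process's
    // rename succeeds owns the file, everyone else gets null and moves on.
    static File claim(File zip) {
        File claimed = new File(zip.getPath() + ".processing");
        return zip.renameTo(claimed) ? claimed : null;  // null: another loader won
    }

    public static void main(String[] args) {
        File dir = new File("C:/loads");  // placeholder drop directory
        File[] zips = dir.listFiles((d, name) -> name.endsWith(".zip"));
        if (zips == null) return;         // directory missing or unreadable
        for (File f : zips) {
            File mine = claim(f);
            if (mine != null) {
                System.out.println("this process loads " + mine);
                // ... unzip with 7-Zip and hand off to sqlldr here ...
            }
        }
    }
}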
Also, just a side suggestion: it's possible to emulate threading in PHP using cURL, you might want to try it out.
https://web.archive.org/web/20091014034235/http://www.ibuildings.co.uk/blog/archives/811-Multithreading-in-PHP-with-CURL.html
I don't know if I understand you right, but I have a suggestion: create the lock files with a priority prefix.
Example:
10-script.php started
20-script.php started (enters a loop, waiting for 10-foobar.ext.lock)
while 10-foobar.ext.lock has not been generated by 10-script.php, it keeps waiting
30-script.php will have to wait for both 10-foobar.ext.lock and 20-example.ext.lock
I tried to find pcntl_fork for Cygwin, but found nothing that works.