Universal Image Loader - clear cache manually - Java

I'm using Universal Image Loader to display images in my app in listviews. I'm using UnlimitedDiscCache since this is the fastest cache mechanism according to the documentation.
However, I would like to clear the disc cache when my app is closed (e.g. in onStop()) but only the oldest cached files that exceed a given limit should be deleted (like TotalSizeLimitedDiscCache does).
I am aware of ImageLoader.clearDiscCache(), but in my case this clears the complete cache, since I am using UnlimitedDiscCache...
So I would like to have the fastest cache mechanism when the user is loading and scrolling the listviews and do the slow cache clear when the user is no longer interacting with the app.
Any ideas how I can achieve this?

Check this answer (https://stackoverflow.com/a/7763725/1023248); it may help you:
@Override
protected void onDestroy() {
    Editor editor = getSharedPreferences("clear_cache", Context.MODE_PRIVATE).edit();
    editor.clear();
    editor.commit();
    trimCache(this);
    super.onDestroy();
    // Closing the entire application; do this last so the cleanup above actually runs
    android.os.Process.killProcess(android.os.Process.myPid());
}
public static void trimCache(Context context) {
    try {
        File dir = context.getCacheDir();
        if (dir != null && dir.isDirectory()) {
            deleteDir(dir);
        }
    } catch (Exception e) {
        // Best-effort cleanup; ignore failures
    }
}
public static boolean deleteDir(File dir) {
    if (dir != null && dir.isDirectory()) {
        String[] children = dir.list();
        for (int i = 0; i < children.length; i++) {
            boolean success = deleteDir(new File(dir, children[i]));
            if (!success) {
                return false;
            }
        }
    }
    // The directory is now empty, so delete it
    return dir.delete();
}
Note: the original answer also mentions declaring <uses-permission android:name="android.permission.CLEAR_APP_CACHE"/> in the manifest.

If you want to clear the cache from the memory, you can use the following code:
MemoryCacheUtils.removeFromCache(imageUri, ImageLoader.getInstance().getMemoryCache());
If you also want to clear the cache in the disk, you can use the following code:
DiscCacheUtils.removeFromCache(imageUri, ImageLoader.getInstance().getDiscCache());
This is a much simpler way of clearing the cache.
You can find similar answers in this thread:
How to force a cache clearing using Universal Image Loader Android?
I hope this information was helpful. :)
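If you want UnlimitedDiscCache's speed while the user is scrolling but a bounded disc footprint afterwards, another option is to trim the cache directory yourself when the app leaves the foreground. Below is a minimal sketch, not part of the UIL API: it deletes the oldest files until the directory's total size drops under a limit. The directory lookup via UIL's StorageUtils.getCacheDirectory() and the 50 MB limit are assumptions; check them against your own ImageLoaderConfiguration.
import java.io.File;
import java.util.Arrays;
import java.util.Comparator;

public final class DiscCacheTrimmer {

    /**
     * Deletes the oldest files in cacheDir until its total size is at most
     * maxBytes. Run this off the UI thread, e.g. from onStop().
     */
    public static void trimToSize(File cacheDir, long maxBytes) {
        File[] files = cacheDir.listFiles();
        if (files == null) {
            return; // not a directory, or an I/O error occurred
        }
        long total = 0;
        for (File f : files) {
            total += f.length();
        }
        // Sort oldest-first by last-modified time
        Arrays.sort(files, new Comparator<File>() {
            @Override
            public int compare(File a, File b) {
                return Long.valueOf(a.lastModified()).compareTo(Long.valueOf(b.lastModified()));
            }
        });
        for (File f : files) {
            if (total <= maxBytes) {
                break;
            }
            long size = f.length();
            if (f.delete()) {
                total -= size;
            }
        }
    }
}
You could then call, for example, DiscCacheTrimmer.trimToSize(StorageUtils.getCacheDirectory(context), 50L * 1024 * 1024) from onStop(), ideally on a background thread.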

Related

How to put multiple assets in a data event for Android Development

I am trying to make an app that sends files from my Android Watch to my Android Phone.
The problem I have is that if I record and save multiple files and send all of them at the same time, I do not get all the files back on the phone side. I only receive one file.
The code for sending the file is as follows; it runs on the watch side:
public void sendData(View v) {
    String fname = "_Activity.bin";
    int FileCounterCopy = FileCounter;
    if (mGoogleApiClient.isConnected()) {
        for (int i = 0; i < FileCounterCopy; i++) {
            String FileName = String.valueOf(i) + fname;
            File dataFile = new File(Environment.getExternalStorageDirectory(), FileName);
            Log.i("Path", Environment.getExternalStorageDirectory().toString());
            Log.i("file", dataFile.toString());
            Asset dataAsset = createAssetfromBin(dataFile);
            sensorData = PutDataMapRequest.create(SENSOR_DATA_PATH);
            sensorData.getDataMap().putAsset("File", dataAsset);
            PutDataRequest request = sensorData.asPutDataRequest();
            Wearable.DataApi.putDataItem(mGoogleApiClient, request).setResultCallback(new ResultCallback<DataApi.DataItemResult>() {
                @Override
                public void onResult(DataApi.DataItemResult dataItemResult) {
                    Log.e("SENDING IMAGE WAS SUCCESSFUL: ", String.valueOf(dataItemResult.getStatus().isSuccess()));
                }
            });
            boolean deleted = dataFile.delete();
            Log.i("Deleted", String.valueOf(deleted));
            FileCounter--;
        }
        mTextView.setText(String.valueOf(FileCounter));
        Return();
    } else {
        Log.d("Not", "Connecteddddddddd");
    }
}
The code for receiving the files is as follows; it runs on the phone side:
@Override
public void onDataChanged(DataEventBuffer dataEvents) {
    Counter++;
    final List<DataEvent> events = FreezableUtils.freezeIterable(dataEvents);
    dataEvents.close();
    Log.e("List Size: ", String.valueOf(events.size()));
    for (DataEvent event : events) {
        if (event.getType() == DataEvent.TYPE_CHANGED) {
            Log.v("Data is changed", "========================");
            String path = event.getDataItem().getUri().getPath();
            if (SENSOR_DATA_PATH.equals(path)) {
                DataMapItem dataMapItem = DataMapItem.fromDataItem(event.getDataItem());
                fileAsset = dataMapItem.getDataMap().getAsset("File");
                myRunnable = createRunnable();
                if (checkSelfPermission(Manifest.permission.WRITE_EXTERNAL_STORAGE) == PackageManager.PERMISSION_GRANTED)
                    new Thread(myRunnable).start();
            }
        }
    }
    status.setText("Received" + " File_" + String.valueOf(Counter));
}
Right before the for loop, I check the size of the event list, and it only ever shows a size of 1, no matter how many files I save.
I am stuck on how to implement this (to be honest, I used code from YouTube videos and other online resources, so I am not 100% sure how some of the API works).
Thanks in advance!
You're putting all of the files at the same path, with nothing to differentiate them - so each one you put in overwrites the previous ones. The Data API works much like a filesystem in this regard.
In your sendData method, you need something like this (using getName() rather than toString() so the absolute path doesn't end up in the URI):
sensorData = PutDataMapRequest.create(SENSOR_DATA_PATH + "/" + dataFile.getName());
And then in onDataChanged, either only check the path prefix...
if (path.startsWith(SENSOR_DATA_PATH)) {
...or, preferably, put the value of SENSOR_DATA_PATH in your manifest declaration as an android:pathPrefix element in the intent-filter of your data receiver. You can then remove the path check from your Java code completely. Docs for that are here: https://developers.google.com/android/reference/com/google/android/gms/wearable/WearableListenerService
One other thing: it's good practice to clear stuff like these files out of the Data API when you're done using them, so that they're not taking up space there.
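To make that concrete, here is a minimal sketch of the send side with a unique path per file, reusing the names from the question (the extra timestamp field is an assumption; it's a common trick to force a change event when identical bytes are re-sent):
// One data item per file: the file name makes each URI path unique
PutDataMapRequest sensorData =
        PutDataMapRequest.create(SENSOR_DATA_PATH + "/" + dataFile.getName());
sensorData.getDataMap().putAsset("File", dataAsset);
// Optional: a timestamp guarantees the item differs from any previous one,
// so onDataChanged fires even if the same bytes are sent twice
sensorData.getDataMap().putLong("timestamp", System.currentTimeMillis());
Wearable.DataApi.putDataItem(mGoogleApiClient, sensorData.asPutDataRequest());
On the phone side, match by prefix instead of equality, and consider deleting each item once you have saved the file, per the note above (this assumes you have a connected GoogleApiClient in the service):
String path = event.getDataItem().getUri().getPath();
if (path.startsWith(SENSOR_DATA_PATH)) {
    // extract the asset, then clean the data item out of the Data API
    Wearable.DataApi.deleteDataItems(mGoogleApiClient, event.getDataItem().getUri());
}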

Netbeans module development - How to modify opened file

I am writing my own Netbeans plugin to edit opened files. I have managed to get some information about currently active file using
TopComponent activeTC = TopComponent.getRegistry().getActivated();
FileObject fo = activeTC.getLookup().lookup(FileObject.class);
io.getOut().println(fo.getNameExt());
io.getOut().println(fo.canWrite());
io.getOut().println(fo.asText());
But I have no idea how to modify this file. Can someone help me with this?
And second question, how to get text selection ranges? I want to run my command only on selected text.
For modifying the file, you could use NetBeans' org.openide.filesystems.FileUtil.toFile() and then the regular Java facilities to read and write files. For getting the selected text of the current editor window, you would have to do something like:
Node[] arr = activeTC.getActivatedNodes();
for (int j = 0; j < arr.length; j++) {
    EditorCookie ec = (EditorCookie) arr[j].getCookie(EditorCookie.class);
    if (ec != null) {
        JEditorPane[] panes = ec.getOpenedPanes();
        if (panes != null) {
            // use panes
        }
    }
}
For more code examples see also here
After several hours of research, I found out the following:
The code I posted in the question can be used to obtain basic information about the active file.
To get the caret position or the selection range, you can do:
JTextComponent editor = EditorRegistry.lastFocusedComponent();
io.getOut().println("Caret pos: "+ editor.getCaretPosition());
io.getOut().println("Selection start: "+ editor.getSelectionStart());
io.getOut().println("Selection end: "+ editor.getSelectionEnd());
To modify the content of the active file (in a way that the modification can be undone with Ctrl+Z), you may use this code:
final StyledDocument doc = context.openDocument();
NbDocument.runAtomicAsUser(doc, new Runnable() {
    public void run() {
        try {
            doc.insertString(offset, "New text.", SimpleAttributeSet.EMPTY);
        } catch (BadLocationException e) {
            // offset was outside the document
        }
    }
});
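Putting the two together, here is a minimal sketch of running a command only on the selected text, undoably. The uppercase transform and the method name are just illustrative; it assumes the focused editor's document is a StyledDocument, which NetBeans editors provide:
import javax.swing.text.BadLocationException;
import javax.swing.text.JTextComponent;
import javax.swing.text.SimpleAttributeSet;
import javax.swing.text.StyledDocument;
import org.netbeans.api.editor.EditorRegistry;
import org.openide.text.NbDocument;

public static void upperCaseSelection() {
    JTextComponent editor = EditorRegistry.lastFocusedComponent();
    final StyledDocument doc = (StyledDocument) editor.getDocument();
    final int start = editor.getSelectionStart();
    final int end = editor.getSelectionEnd();
    if (start >= end) {
        return; // nothing selected
    }
    try {
        NbDocument.runAtomicAsUser(doc, new Runnable() {
            public void run() {
                try {
                    // Replace the selection with a transformed copy of itself
                    String selected = doc.getText(start, end - start);
                    doc.remove(start, end - start);
                    doc.insertString(start, selected.toUpperCase(), SimpleAttributeSet.EMPTY);
                } catch (BadLocationException e) {
                    // selection changed underneath us; give up quietly
                }
            }
        });
    } catch (BadLocationException e) {
        // runAtomicAsUser can also report a bad location
    }
}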

How to programatically clear application cache in Android 4.2.2 and above

In Android 4.0 I use this solution to clear my application cache and it works perfectly:
public void clearApplicationData()
{
    File cache = getCacheDir();
    File appDir = new File(cache.getParent());
    if (appDir.exists()) {
        String[] children = appDir.list();
        for (String s : children) {
            if (!s.equals("lib")) {
                deleteDir(new File(appDir, s));
            }
        }
    }
}
public static boolean deleteDir(File dir)
{
    if (dir != null && dir.isDirectory()) {
        String[] children = dir.list();
        for (int i = 0; i < children.length; i++) {
            boolean success = deleteDir(new File(dir, children[i]));
            if (!success) {
                return false;
            }
        }
    }
    return dir.delete();
}
Unfortunately, this solution doesn't work on Android 4.2.2 (and probably not on later Android versions either). Does anybody know why? Maybe there is another method to clear the cache?
In particular, I am interested in clearing the Google Maps cache; the solution above works for me on Android 4.0 but not on Android 4.2.2. Any help would be appreciated.
I don't get any errors in logcat. Device: Samsung Galaxy Tab 2 7.0.
I'm writing this as an answer because my comment would probably get buried. I also had trouble clearing the cache on a 4.2.2 device, but this code by David Wasser in this post worked for me:
PackageManager pm = getPackageManager();
// Get all methods on the PackageManager
Method[] methods = pm.getClass().getDeclaredMethods();
for (Method m : methods) {
    if (m.getName().equals("freeStorage")) {
        try {
            // Use a long literal: 8 * 1024 * 1024 * 1024 overflows int arithmetic
            long desiredFreeStorage = 8L * 1024 * 1024 * 1024;
            m.invoke(pm, desiredFreeStorage, null);
        } catch (Exception e) {
            // Method invocation failed. Could be a permission problem
        }
        break;
    }
}

Use jpathwatch to find out when a program finished writing to a file

I'm currently using jpathwatch to watch for new files created in a folder. All fine, but I need to find out when a program has finished writing to a file.
The library's author describes on his website (http://jpathwatch.wordpress.com/faq/) how that's done, but somehow I don't have a clue how to do it. Maybe it's described a bit unclearly, or I just don't get it.
I would like to ask whether you could give me a snippet which demonstrates how to do that.
This is the basic construct:
public void run() {
    while (true) {
        WatchKey signalledKey;
        try {
            signalledKey = watchService.take();
        } catch (InterruptedException ix) {
            continue;
        } catch (ClosedWatchServiceException cwse) {
            break;
        }
        List<WatchEvent<?>> list = signalledKey.pollEvents();
        signalledKey.reset();
        for (WatchEvent<?> e : list) {
            if (e.kind() == StandardWatchEventKind.ENTRY_CREATE) {
                Path context = (Path) e.context();
                String filename = context.toString();
                // do something
            } else if (e.kind() == StandardWatchEventKind.ENTRY_DELETE) {
                Path context = (Path) e.context();
                String filename = context.toString();
                // do something
            } else if (e.kind() == StandardWatchEventKind.OVERFLOW) {
                // events may have been lost; rescan the directory if needed
            }
        }
    }
}
From the FAQ for jpathwatch, the author says that you will get ENTRY_MODIFY events regularly while a file is being written, and that those events stop being generated when writing is complete. He suggests that you keep a list of files and the timestamp of the last event generated for each one.
At some interval (which he refers to as a timeout), you scan through the list of files and their timestamps. If any file has a timestamp older than your timeout interval, that should mean it isn't being updated anymore and is probably complete.
He even suggests you try to determine the rate at which a file is growing and extrapolate when it should complete, so that you can set your poll time to the expected completion time.
Does that clear it up at all? Sorry I'm not up to expressing that in code :)
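For what it's worth, here is a minimal sketch of that bookkeeping. The two-second quiet period and all class/method names are assumptions, not jpathwatch API; you would call touch() from the watch loop and drainCompleted() from a periodic timer:
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

public class QuietFileTracker {

    private static final long QUIET_PERIOD_MS = 2000; // assumed timeout

    private final Map<String, Long> lastEvent = new HashMap<String, Long>();

    /** Call this from the watch loop on every ENTRY_CREATE/ENTRY_MODIFY. */
    public synchronized void touch(String filename) {
        lastEvent.put(filename, Long.valueOf(System.currentTimeMillis()));
    }

    /** Call this periodically; returns the files that have gone quiet. */
    public synchronized List<String> drainCompleted() {
        List<String> done = new ArrayList<String>();
        long now = System.currentTimeMillis();
        Iterator<Map.Entry<String, Long>> it = lastEvent.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, Long> e = it.next();
            if (now - e.getValue().longValue() > QUIET_PERIOD_MS) {
                done.add(e.getKey());
                it.remove();
            }
        }
        return done;
    }
}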

Is there a workaround for Java's poor performance on walking huge directories?

I am trying to process files, one at a time, that are stored over a network. Reading the files is fast thanks to buffering; that is not the issue. The problem I have is just listing the contents of a directory. I have at least 10k files per folder, across many folders.
Performance is super slow since File.list() returns an array instead of an iterable: Java goes off, collects all the names in a folder, and packs them into an array before returning.
The bug entry for this is http://bugs.sun.com/view_bug.do?bug_id=4285834 and there is no workaround; they just say it has been fixed for JDK7.
A few questions:
Does anybody have a workaround to this performance bottleneck?
Am I trying to achieve the impossible? Is performance still going to be poor even if it just iterates over the directories?
Could I use the beta JDK7 builds that have this functionality without having to build my entire project on it?
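(For reference, the JDK7 API that fixes this is java.nio.file.DirectoryStream, which fetches entries lazily as you iterate instead of materializing the whole array. A minimal sketch, assuming a JDK7 build:)
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class LazyList {
    public static void main(String[] args) throws Exception {
        Path dir = Paths.get(args[0]);
        // Entries are fetched incrementally as the iterator advances
        DirectoryStream<Path> stream = Files.newDirectoryStream(dir);
        try {
            for (Path entry : stream) {
                System.out.println(entry.getFileName());
            }
        } finally {
            stream.close();
        }
    }
}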
Although it's not pretty, I solved this kind of problem once by piping the output of dir/ls to a file before starting my app, and passing in the filename.
If you needed to do it within the app, you could just use Runtime.getRuntime().exec(), but it would create some nastiness.
You asked. The first form is going to be blazingly fast, the second should be pretty fast as well.
Be sure to use the one-item-per-line (bare, no decoration, no graphics), full-path, and recurse options of your selected command.
EDIT:
30 minutes just to get a directory listing, wow.
It just struck me that if you use exec(), you can get its stdout redirected into a pipe instead of writing it to a file.
If you did that, you should start getting the files immediately and be able to begin processing before the command has completed.
The interaction may actually slow things down, but maybe not--you might give it a try.
Wow, I just went to find the syntax of the .exec command for you and came across this, possibly exactly what you want (it lists a directory using exec and "ls" and pipes the result into your program for processing): good link in wayback (Jörg provided in a comment to replace this one from sun that Oracle broke)
Anyway, the idea is straightforward but getting the code right is annoying. I'll go steal some codes from the internets and hack them up--brb
/**
 * Note: Only use this as a last resort! It's specific to Windows and even
 * at that it's not a good solution, but it should be fast.
 *
 * To use it, extend FileProcessor and call processFiles("...") with a list
 * of options if you want them, like /s... I highly recommend /b.
 *
 * Override processFile and it will be called once for each line of output.
 */
import java.io.*;

public abstract class FileProcessor
{
    public void processFiles(String dirOptions)
    {
        Process theProcess = null;
        BufferedReader inStream = null;

        // Launch the external "dir" command
        try
        {
            theProcess = Runtime.getRuntime().exec("cmd /c dir " + dirOptions);
        }
        catch (IOException e)
        {
            System.err.println("Error on exec() method");
            e.printStackTrace();
        }

        // Read from the called program's standard output stream,
        // invoking the callback once per line
        try
        {
            inStream = new BufferedReader(
                new InputStreamReader(theProcess.getInputStream()));
            String line;
            while ((line = inStream.readLine()) != null)
            {
                processFile(line);
            }
        }
        catch (IOException e)
        {
            System.err.println("Error on inStream.readLine()");
            e.printStackTrace();
        }
    } // end method

    /** Override this method--it will be called once for each file */
    public abstract void processFile(String filename);
} // end class
And thank you code donor at IBM
How about using the File.list(FilenameFilter filter) method and implementing FilenameFilter.accept(File dir, String name) to process each file and return false?
I ran this on a Linux VM against a directory with 10K+ files and it took less than 10 seconds.
import java.io.File;
import java.io.FilenameFilter;

public class Temp {
    private static void processFile(File dir, String name) {
        File file = new File(dir, name);
        System.out.println("processing file " + file.getName());
    }

    private static void forEachFile(File dir) {
        // accept() is invoked once per entry; returning false keeps the
        // resulting array empty, so nothing is accumulated in memory
        String[] ignore = dir.list(new FilenameFilter() {
            public boolean accept(File dir, String name) {
                processFile(dir, name);
                return false;
            }
        });
    }

    public static void main(String[] args) {
        long before, after;
        File dot = new File(".");
        before = System.currentTimeMillis();
        forEachFile(dot);
        after = System.currentTimeMillis();
        System.out.println("after call, delta is " + (after - before));
    }
}
An alternative is to have the files served over a different protocol. As I understand it, you're using SMB, and Java is just trying to list the entries as regular files.
The problem here might not be Java alone (how does it behave when you open that directory with Windows Explorer, e.g. x:\shared?). In my experience that also takes a considerable amount of time.
You could change the protocol to something like HTTP, just to fetch the file names. This way you can retrieve the list of files over HTTP (10k lines shouldn't be too much) and let the server deal with the file listing. This would be very fast, since it runs with local resources (those on the server).
Then, when you have the list, you can process the files one by one exactly the way you're doing right now.
The key point is to have a helper mechanism on the other side of the connection.
Is this feasible?
Today:
File[] content = new File("X:\\remote\\dir").listFiles();
for (File f : content) {
    process(f);
}
Proposed:
String[] content = fetchViaHttpTheListNameOf("x:\\remote\\dir");
for (String fileName : content) {
    process(new File(fileName));
}
The HTTP server could be a very small and simple one.
If this is the way you have it right now, you're fetching all the information about the 10k files to your client machine (I don't know how much of that info) when you only need the file names for later processing.
If the processing is very fast right now, it may be slowed down a bit, because the prefetched information is no longer available.
Give it a try.
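A minimal sketch of the hypothetical fetchViaHttpTheListNameOf helper, assuming a server-side endpoint that returns one file path per line as plain text (the URL layout and how the server maps the shared directory are assumptions):
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

public class FileListClient {
    static String[] fetchViaHttpTheListNameOf(String listUrl) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(listUrl).openConnection();
        BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()));
        try {
            List<String> names = new ArrayList<String>();
            String line;
            while ((line = in.readLine()) != null) {
                if (line.length() > 0) {
                    names.add(line); // one file path per line
                }
            }
            return names.toArray(new String[names.size()]);
        } finally {
            in.close();
            conn.disconnect();
        }
    }
}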
A non-portable solution would be to make native calls to the operating system and stream the results.
For Linux
You can look at something like readdir. You can walk the directory structure like a linked list and return results in batches or individually.
For Windows
On Windows the behavior would be fairly similar, using the FindFirstFile and FindNextFile APIs.
I doubt the problem is related to the bug report you referenced.
The issue there is "only" memory usage, not necessarily speed.
If you have enough memory, the bug is not relevant to your problem.
You should measure whether your problem is memory-related or not. Turn on garbage collector logging and use, for example, GCViewer to analyze your memory usage.
I suspect it has to do with the SMB protocol.
You can try writing a test in another language to see if it's faster, or you can try to get the list of filenames through some other method, such as the one described here in another post.
If you eventually need to process all the files, then having an Iterable over a String[] won't give you any advantage, as you'll still have to fetch the whole list of files.
If you're on Java 1.5 or 1.6, shelling out "dir" commands and parsing the standard output stream on Windows is a perfectly acceptable approach. I've used this approach in the past for processing network drives and it has generally been a lot faster than waiting for the native java.io.File listFiles() method to return.
Of course, a JNI call should be faster and potentially safer than shelling out "dir" commands. The following JNI code can be used to retrieve a list of files/directories using the Windows API. This function can easily be refactored into a new class so the caller can retrieve file paths incrementally (i.e. get one path at a time). For example, you can refactor the code so that FindFirstFileW is called in the constructor and have a separate method that calls FindNextFileW.
JNIEXPORT jstring JNICALL Java_javaxt_io_File_GetFiles(JNIEnv *env, jclass, jstring directory)
{
    HANDLE hFind;
    try {
        //Convert jstring to wstring
        const jchar *_directory = env->GetStringChars(directory, 0);
        jsize x = env->GetStringLength(directory);
        wstring path; //L"C:\\temp\\*";
        path.assign(_directory, _directory + x);
        env->ReleaseStringChars(directory, _directory);
        if (x < 2){
            jclass exceptionClass = env->FindClass("java/lang/Exception");
            env->ThrowNew(exceptionClass, "Invalid path, less than 2 characters long.");
            return NULL; //ThrowNew does not transfer control; return explicitly
        }

        wstringstream ss;
        BOOL bContinue = TRUE;
        WIN32_FIND_DATAW data;
        hFind = FindFirstFileW(path.c_str(), &data);
        if (INVALID_HANDLE_VALUE == hFind){
            jclass exceptionClass = env->FindClass("java/lang/Exception");
            env->ThrowNew(exceptionClass, "FindFirstFileW returned invalid handle.");
            return NULL;
        }

        //HANDLE hStdOut = GetStdHandle(STD_OUTPUT_HANDLE);
        //DWORD dwBytesWritten;

        // If we have no error, loop thru the files in this dir
        while (hFind && bContinue){
            /*
            //Debug print statement. DO NOT DELETE! cout and wcout do not print unicode correctly.
            WriteConsole(hStdOut, data.cFileName, (DWORD)_tcslen(data.cFileName), &dwBytesWritten, NULL);
            WriteConsole(hStdOut, L"\n", 1, &dwBytesWritten, NULL);
            */

            //Check if this entry is a directory
            if (data.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY){
                // Make sure this dir is not . or ..
                if (wstring(data.cFileName) != L"." &&
                    wstring(data.cFileName) != L"..")
                {
                    ss << wstring(data.cFileName) << L"\\" << L"\n";
                }
            }
            else{
                ss << wstring(data.cFileName) << L"\n";
            }
            bContinue = FindNextFileW(hFind, &data);
        }
        FindClose(hFind); // Free the dir structure

        wstring cstr = ss.str();
        int len = cstr.size();
        //WriteConsole(hStdOut, cstr.c_str(), len, &dwBytesWritten, NULL);
        //WriteConsole(hStdOut, L"\n", 1, &dwBytesWritten, NULL);

        jchar* raw = new jchar[len];
        memcpy(raw, cstr.c_str(), len*sizeof(wchar_t));
        jstring result = env->NewString(raw, len);
        delete[] raw;
        return result;
    }
    catch(...){
        FindClose(hFind);
        jclass exceptionClass = env->FindClass("java/lang/Exception");
        env->ThrowNew(exceptionClass, "Exception occurred.");
    }
    return NULL;
}
Credit:
https://sites.google.com/site/jozsefbekes/Home/windows-programming/miscellaneous-functions
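For context, the Java side of that native function would look roughly like the following. The library name and the convenience wrapper are assumptions; only the package, class, and method name are fixed by the JNI symbol Java_javaxt_io_File_GetFiles:
package javaxt.io;

public class File {

    static {
        // The native library name is an assumption
        System.loadLibrary("javaxt");
    }

    // Maps to Java_javaxt_io_File_GetFiles above: one entry per line,
    // directories suffixed with a backslash
    private static native String GetFiles(String directory);

    /** Hypothetical convenience wrapper around the native call. */
    public static String[] listEntries(String dir) {
        // FindFirstFileW expects a wildcard pattern, e.g. C:\temp\*
        return GetFiles(dir + "\\*").split("\n");
    }
}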
Even with this approach, there are still efficiencies to be gained. If you serialize the path to a java.io.File, there is a huge performance hit - especially if the path represents a file on a network drive. I have no idea what Sun/Oracle is doing under the hood, but if you need file attributes other than the path (e.g. size, modification date, etc.), I have found that the following JNI function is much faster than instantiating a java.io.File object for a path on a network drive.
JNIEXPORT jlongArray JNICALL Java_javaxt_io_File_GetFileAttributesEx(JNIEnv *env, jclass, jstring filename)
{
    //Convert jstring to wstring
    const jchar *_filename = env->GetStringChars(filename, 0);
    jsize len = env->GetStringLength(filename);
    wstring path;
    path.assign(_filename, _filename + len);
    env->ReleaseStringChars(filename, _filename);

    //Get attributes
    WIN32_FILE_ATTRIBUTE_DATA fileAttrs;
    BOOL result = GetFileAttributesExW(path.c_str(), GetFileExInfoStandard, &fileAttrs);
    if (!result) {
        jclass exceptionClass = env->FindClass("java/lang/Exception");
        env->ThrowNew(exceptionClass, "Exception Occurred");
        return NULL; //ThrowNew does not transfer control; return explicitly
    }

    //Create an array to store the WIN32_FILE_ATTRIBUTE_DATA
    jlong buffer[6];
    buffer[0] = fileAttrs.dwFileAttributes;
    buffer[1] = date2int(fileAttrs.ftCreationTime);
    buffer[2] = date2int(fileAttrs.ftLastAccessTime);
    buffer[3] = date2int(fileAttrs.ftLastWriteTime);
    buffer[4] = fileAttrs.nFileSizeHigh;
    buffer[5] = fileAttrs.nFileSizeLow;

    jlongArray jLongArray = env->NewLongArray(6);
    env->SetLongArrayRegion(jLongArray, 0, 6, buffer);
    return jLongArray;
}
You can find a full working example of this JNI-based approach in the javaxt-core library. In my tests using Java 1.6.0_38 with a Windows host hitting a Windows share, I found this JNI approach approximately 10x faster than calling java.io.File.listFiles() or shelling out "dir" commands.
I wonder why there are 10k files in one directory. Some file systems do not work well with so many files. There are specific limitations for file systems, such as the maximum number of files per directory and the maximum depth of subdirectories.
I solved a similar problem with an iterator-based solution.
I needed to walk huge directories and several levels of directory tree recursively.
I tried FileUtils.iterateFiles() from Apache Commons IO, but it implements the iterator by adding all the files to a List and then returning List.iterator(), which is very bad for memory.
So I preferred to write something like this:
private static class SequentialIterator implements Iterator<File> {
    private DirectoryStack dir = null;
    private File current = null;
    private long limit;
    private FileFilter filter = null;

    public SequentialIterator(String path, long limit, FileFilter ff) {
        current = new File(path);
        this.limit = limit;
        filter = ff;
        dir = DirectoryStack.getNewStack(current);
    }

    public boolean hasNext() {
        while (walkOver());
        return isMore && (limit > count || limit < 0) && dir.getCurrent() != null;
    }

    private long count = 0;

    public File next() {
        File aux = dir.getCurrent();
        dir.advancePosition();
        count++;
        return aux;
    }

    private boolean walkOver() {
        if (dir.isOutOfDirListRange()) {
            if (dir.isCantGoParent()) {
                isMore = false;
                return false;
            } else {
                dir.goToParent();
                dir.advancePosition();
                return true;
            }
        } else {
            if (dir.isCurrentDirectory()) {
                if (dir.isDirectoryEmpty()) {
                    dir.advancePosition();
                } else {
                    dir.goIntoDir();
                }
                return true;
            } else {
                if (filter.accept(dir.getCurrent())) {
                    return false;
                } else {
                    dir.advancePosition();
                    return true;
                }
            }
        }
    }

    private boolean isMore = true;

    public void remove() {
        throw new UnsupportedOperationException();
    }
}
Note that the iterator stops after a given number of files has been iterated, and it also takes a FileFilter.
And DirectoryStack is:
public class DirectoryStack {

    private class Element {
        private File files[] = null;
        private int currentPointer;

        public Element(File current) {
            currentPointer = 0;
            if (current.exists()) {
                if (current.isDirectory()) {
                    // Sort the listing by routing it through a TreeSet
                    files = current.listFiles();
                    Set<File> set = new TreeSet<File>();
                    for (int i = 0; i < files.length; i++) {
                        File file = files[i];
                        set.add(file);
                    }
                    set.toArray(files);
                } else {
                    throw new IllegalArgumentException("File current must be directory");
                }
            } else {
                throw new IllegalArgumentException("File current does not exist");
            }
        }

        public String toString() {
            return "current=" + getCurrent().toString();
        }

        public int getCurrentPointer() {
            return currentPointer;
        }

        public void setCurrentPointer(int currentPointer) {
            this.currentPointer = currentPointer;
        }

        public File[] getFiles() {
            return files;
        }

        public File getCurrent() {
            File ret = null;
            try {
                ret = getFiles()[getCurrentPointer()];
            } catch (Exception e) {
                // pointer out of range: return null
            }
            return ret;
        }

        public boolean isDirectoryEmpty() {
            return !(getFiles().length > 0);
        }

        public Element advancePointer() {
            setCurrentPointer(getCurrentPointer() + 1);
            return this;
        }
    }

    private DirectoryStack(File first) {
        getStack().push(new Element(first));
    }

    public static DirectoryStack getNewStack(File first) {
        return new DirectoryStack(first);
    }

    public String toString() {
        String ret = "stack:\n";
        int i = 0;
        for (Element elem : stack) {
            ret += "level " + i++ + elem.toString() + "\n";
        }
        return ret;
    }

    private Stack<Element> stack = null;

    private Stack<Element> getStack() {
        if (stack == null) {
            stack = new Stack<Element>();
        }
        return stack;
    }

    public File getCurrent() {
        return getStack().peek().getCurrent();
    }

    public boolean isDirectoryEmpty() {
        return getStack().peek().isDirectoryEmpty();
    }

    public DirectoryStack downLevel() {
        getStack().pop();
        return this;
    }

    public DirectoryStack goToParent() {
        return downLevel();
    }

    public DirectoryStack goIntoDir() {
        return upLevel();
    }

    public DirectoryStack upLevel() {
        if (isCurrentNotNull())
            getStack().push(new Element(getCurrent()));
        return this;
    }

    public DirectoryStack advancePosition() {
        getStack().peek().advancePointer();
        return this;
    }

    public File[] peekDirectory() {
        return getStack().peek().getFiles();
    }

    public boolean isLastFileOfDirectory() {
        return getStack().peek().getFiles().length <= getStack().peek().getCurrentPointer();
    }

    public boolean gotMoreLevels() {
        return getStack().size() > 0;
    }

    public boolean gotMoreInCurrentLevel() {
        return getStack().peek().getFiles().length > getStack().peek().getCurrentPointer() + 1;
    }

    public boolean isRoot() {
        return !(getStack().size() > 1);
    }

    public boolean isCurrentNotNull() {
        if (!getStack().isEmpty()) {
            int currentPointer = getStack().peek().getCurrentPointer();
            int maxFiles = getStack().peek().getFiles().length;
            return currentPointer < maxFiles;
        } else {
            return false;
        }
    }

    public boolean isCurrentDirectory() {
        return getStack().peek().getCurrent().isDirectory();
    }

    public boolean isLastFromDirList() {
        return getStack().peek().getCurrentPointer() == (getStack().peek().getFiles().length - 1);
    }

    public boolean isCantGoParent() {
        return !(getStack().size() > 1);
    }

    public boolean isOutOfDirListRange() {
        return getStack().peek().getFiles().length <= getStack().peek().getCurrentPointer();
    }
}
Using an Iterable doesn't imply that the files will be streamed to you; in fact it's usually the opposite. So an array is typically faster than an Iterable.
Are you sure it's due to Java, not just a general problem with having 10k entries in one directory, particularly over the network?
Have you tried writing a proof-of-concept program to do the same thing in C using the Win32 FindFirstFile/FindNextFile functions, to see whether it's any faster?
I don't know the ins and outs of SMB, but I strongly suspect that it needs a round trip for every file in the list, which is not going to be fast, particularly over a network with moderate latency.
Having 10k strings in an array sounds like something that should not tax a modern Java VM too much either.
