Need to remove cache with Android Studio - java

I am developing an application with Android Studio, but I have a problem: I must always clear the IDE's cache (File > Invalidate Caches / Restart) before each compilation.
How do you configure your IDE to avoid this problem?
Thanks in advance.

It looks like the actual problem is why you have to clean the cache on every compilation at all. You shouldn't need to clean the cache on every compilation.
However, the current version of the documentation does say that the cache can become overloaded and cause problems:
"IntelliJ IDEA caches a great number of files, therefore the system cache may one day become overloaded. In certain situations the caches will never be needed again, for example, if you work with frequent short-term projects. Also, the only way to solve some conflicts is to clean out the cache."
P.S. At the same time, @yole says that "There is no such thing as the IntelliJ IDEA cache being overloaded."

Related

IntelliJ IDEA Community constantly freezes, and Maven projects are stuck on "reading pom.xml" for a very long time

I have just installed IntelliJ IDEA Community on my work computer (a virtual machine) and it constantly freezes, sometimes for more than a minute, every few minutes. Additionally, when I can finally do some work and open a Maven project, the "reading pom.xml" stage can take 20-30 minutes. This also happens any time I make changes to my pom.xml file.
I read a bit about the VM options but couldn't understand enough of it to make any changes.
Google Drive Link to idea logs
Check your IntelliJ memory settings and increase them if they are low:
Go to Help -> Change Memory Settings.
In the popup, increase the memory.
Click Save and Restart.
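If that menu entry is not available in your version, the same change can be made by editing the IDE's VM options file (Help -> Edit Custom VM Options in recent builds). A minimal sketch; the values are illustrative and should be sized to your machine and project:

-Xms512m
-Xmx2048m
-XX:ReservedCodeCacheSize=512m

The -Xmx value (maximum heap) is the one that matters most for large Maven imports.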
I fixed the Maven problem by setting a proxy in IntelliJ. This did not occur to me at first, as none of the other IDEs we use have proxies set.
As for the constant freezing, most of it was caused by the Maven scans, and since I don't have that problem anymore, I don't need to worry too much about the freezes either.
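IntelliJ's proxy lives under Settings > Appearance & Behavior > System Settings > HTTP Proxy (in recent versions). If Maven itself also needs to go through the proxy, it reads its own configuration from ~/.m2/settings.xml; a minimal sketch with a hypothetical host and port:

<settings>
  <proxies>
    <proxy>
      <id>corporate-proxy</id>
      <active>true</active>
      <protocol>http</protocol>
      <!-- hypothetical host and port; use your company's proxy -->
      <host>proxy.example.com</host>
      <port>8080</port>
    </proxy>
  </proxies>
</settings>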

Eclipse says I have errors every day, but I don't

This seems to be a problem with the on-the-fly code parser. As I am typing something, Eclipse (latest release version) might update to reflect that I have an error. But when I finish typing the line, the error will still remain in the Problems tab and never get fixed, no matter what changes I make to that file or any other file. The only way to fix this is to go to Project > Clean..., which of course works every time.
I use a rapid prototyping technique where I need to test my project hundreds of times throughout a day. This error happens about 10% of the time I want to run or debug the project, which adds up to many times every day. Cleaning my project takes a significant amount of time, since we have hundreds of resources that need to be re-parsed. This is very frustrating and is killing my productivity. Is there a fix or patch for Eclipse that addresses this problem? If not, does anyone know of a workaround?
Right-click your project and hit Refresh. It usually works and is much faster than a full clean and build.
Can you please expand on the errors?
I guess this must be due to a linking error. Ensure all the required JARs are added; that should resolve it.

99% vs 1% in terms of code compilation or build

This is in terms of code compilation and nothing else. :)
So, I am a newbie at my company and predictably got stuck with an awesomely slow computer. And I am having a big problem with NetBeans running out of memory/resources every time I make a build. I am compiling my Java files.
I was using 7.0, and even though I was getting this error, I got around it by compiling the source packages in chunks (sometimes I had to compile the selected ones more than once).
But ever since I moved to 7.2, this problem has been getting worse. I now have to compile the packages in even smaller chunks, sometimes package by package or file by file, costing me a lot of time and even a lot of hair.
I have no idea which packages to compile first; NetBeans was taking care of that, which is part of what consumes the resources.
Most of my colleagues have powerful computers and have no problem building the whole source base. So, I started getting the compiled packages from them and only building the required ones.
So, is this the correct approach, or should I build the whole source (even though I only change 1% of the total code base at any given time)?
Almost everyone in this company builds the whole code base at least once, even though most of the changes are only in that 1%.
It is far better to build the entire project and have it work as designed than to build 99% of it and have it not work. There's no indication whether the 1% is critical or non-critical code, and as a beginner, you can't tell that right off the bat.
I would inform your teammates/IT personnel about the slow build and ask what can be done to resolve it, instead of building the code in chunks.
Maybe you should highlight the issue of a developer with a slow machine being impeded in their work; when you explain the lost productivity versus the hardware cost, you will shortly have a new machine.
Then you can stop worrying about building "99%" and get on to real issues.
It's better to build the entire project. Try tuning netbeans.conf:
netbeans_default_options="-J-client -J-Xss4m -J-Xms128m -J-XX:PermSize=128m -J-XX:MaxPermSize=512m -J-Dapple.laf.useScreenMenuBar=true -J-Dapple.awt.graphics.UseQuartz=true -J-Dsun.java2d.noddraw=true -J-XX:+UseParNewGC -J-XX:+UseConcMarkSweepGC -J-XX:+CMSClassUnloadingEnabled -J-XX:+CMSPermGenSweepingEnabled"
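Note that the -J-XX:PermSize and -J-XX:MaxPermSize options only take effect when NetBeans runs on Java 7 or earlier; the permanent generation was removed in Java 8, where the JVM ignores these flags with a warning.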
"So, is this the correct approach, or should I build the whole source (even though I only change 1% of the total code base at any given time)?"
I think you can build only parts of the project if you know all the internal dependencies perfectly and can guarantee that no unexpected behaviour happens in a nearby module after your modifications. That is my opinion. Moreover, you can change code and compile it successfully while the entire project build still fails.
P.S. You should get your company to buy you a new computer.
In theory you could walk through all the dependencies and make yourself a dependency hierarchy map; then you should only have to compile the code you've changed plus everything that depends on it. However, it's not necessarily 100% foolproof and requires A LOT of effort for very little gain. It's not something I would expect to be a newbie's responsibility to sort out; rather, your superiors should get you sorted out with some appropriate kit.
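For what it's worth, the manual version of this idea looks roughly like the sketch below, assuming a conventional src/ and build/ layout (the package path is hypothetical). Classes already compiled into build/ stand in for the 99% you did not touch:

javac -classpath build -d build src/com/example/changedpkg/*.java

The catch is exactly the one described above: javac will not notice when an untouched class depends on something you changed, so a full build remains the only safe check.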

IntelliJ IDEA freezing on source code completion

I'm using IntelliJ IDEA 10.0 for Java development. A few days ago it started showing strange behavior with auto-completion: the pop-up with completion options appears as usual, but IDEA completely freezes after I choose an option.
Cache cleaning doesn't help.
Has anyone else encountered this?
Update: Another symptom: IDEA freezes when trying to auto-implement a method (e.g. toString).
This may be due to the garbage collector working hard.
Try giving your IDE more memory. You can do this in idea.exe.vmoptions (if you use Windows). Increase the -Xmx property to at least 512 MB.
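To verify the garbage-collector theory before changing anything, GC logging can be enabled in the same vmoptions file. A sketch using the flags of that era (IDEA 10 ran on Java 6):

-Xmx512m
-verbose:gc
-XX:+PrintGCDetails

If the log shows back-to-back full collections while the completion popup is open, raising -Xmx is the right fix.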
This may not be the same issue you describe, but I have experienced long (but not eternal) freezes, where after a minute or two it came back to respond. This happened whenever I pressed Ctrl+Alt+Space in the code completion popup, which caused IDEA to load all project and external libraries to browse for possible completion options.

Trying to cause java.lang.OutOfMemoryError

I am trying to reproduce a java.lang.OutOfMemoryError in JBoss 4, which one of our clients got, presumably by running their J2EE applications over days/weeks.
I am trying to find a way to make the webapp spit out the java.lang.OutOfMemoryError in a matter of minutes (instead of days/weeks).
One thing that comes to mind is to write a Selenium script and have it bombard the webapp.
One other thing we could do is reduce the JVM heap size, but we would prefer not to, as we want to see the limit of our system.
Any suggestions?
P.S. I don't have access to the source code, as we just provide a hosting service (of course, I could decompile the class files...).
If you don't have access to the source code of the J2EE app in question, the options that come to mind are:
Reduce the amount of RAM available to the JVM. You've already identified this one and said you don't want to do it.
Create a J2EE app (it could probably just be a JSP) and configure it to run within the same JVM as the target app, and have it allocate a ridiculous amount of memory. That will reduce the memory available to the target app, hopefully enough that it fails in the way you're trying to force.
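A minimal sketch of such a JSP (a hypothetical leak.jsp dropped into any webapp on the same JVM). Each request pins roughly another 10 MB in application scope, so hitting it repeatedly drives the shared heap towards an OutOfMemoryError in minutes:

<%@ page import="java.util.List, java.util.ArrayList" %>
<%
    // Keep the arrays reachable from the servlet context so the
    // garbage collector can never reclaim them.
    List hog = (List) application.getAttribute("hog");
    if (hog == null) {
        hog = new ArrayList();
        application.setAttribute("hog", hog);
    }
    for (int i = 0; i < 10; i++) {
        hog.add(new byte[1024 * 1024]); // pin 1 MB per iteration
    }
    out.println("Pinned roughly " + hog.size() + " MB so far");
%>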
Try using profiling tools to investigate the memory leakage. It is also good to investigate memory dumps taken after the OOM happens, along with the logs. IMHO, reducing memory is not the right way to investigate, because you can run into issues not connected with the real production one.
Do both, but in a controlled fashion:
Reduce the available memory to the absolute minimum (using -Xms1M -Xmx2M, as an example, but I fear your app won't even load with such limitations).
Do controlled "nuclear irradiation": run Selenium scripts over each of the known-working URLs before attacking the presumed guilty one (a plain-JDK alternative is sketched below).
Finally, unleash the power that shall not be raised: start VisualVM and any other monitoring software you can think of (DB execution is a usual suspect).
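If setting up Selenium is overkill, the bombardment itself needs nothing but the JDK. A minimal sketch; the target URL is hypothetical:

import java.net.HttpURLConnection;
import java.net.URL;

public class Bombard {
    public static void main(String[] args) throws Exception {
        // Hypothetical URL of the suspect page; point it at the real deployment.
        URL target = new URL("http://localhost:8080/webapp/suspect.do");
        for (int i = 1; ; i++) {           // loops until the server misbehaves
            HttpURLConnection conn = (HttpURLConnection) target.openConnection();
            conn.getResponseCode();        // forces the request to be sent
            conn.disconnect();
            if (i % 100 == 0) {
                System.out.println(i + " requests sent");
            }
        }
    }
}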
If you are using Sun Java 6, you may want to consider attaching to the application with jvisualvm from the JDK. This will allow you to do in-place profiling without needing to alter anything in your scenario, and may immediately reveal the culprit.
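If there is no GUI available on the server, a heap dump can also be captured from the command line with jmap, which ships in the same JDK (<pid> is the JBoss process id), and then opened in jvisualvm afterwards:

jmap -dump:live,format=b,file=heap.hprof <pid>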
If you don't have the source, you can decompile it, at least if you think the terms of use allow this and you live in a free country. You can use Java Decompiler or JAD.
In addition to all the others, I must say that even if you can reproduce an OutOfMemory error and find out where it occurred, you probably haven't found out anything worth knowing.
The trouble is that an OOM occurs when an allocation cannot take place. The real problem, however, is not that allocation, but the fact that other allocations, in other parts of the code, have not been de-allocated (de-referenced and garbage collected). The failed allocation here might have nothing to do with the source of the trouble (no pun intended).
This problem is larger in your case, as it might take weeks before trouble starts, suggesting either a sparsely used application, an abnormal code path, or a relatively HUGE amount of memory in relation to what would be necessary if the code were OK.
It might be a good idea to ask around why this amount of memory is configured for JBoss and not something different. If it's recommended by the supplier, then maybe they already know about the leak and require this to mitigate the effects of the bug.
For this kind of error it really pays to have some idea in which code path the problem occurs, so you can do targeted tests. And test with a profiler, so you can see at run time which objects (Lists, Maps and such) are growing without shrinking.
That would give you a chance to decompile the correct classes and see what's wrong with them (closing or cleaning up in the try block rather than in a finally block, perhaps; see the sketch below).
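To illustrate that last point, a sketch of the classic shape of such a bug; the names are hypothetical, but the pattern is exactly what a profiler comparison tends to expose:

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

public class LeakExample {
    // Leaky: if runQuery() throws, close() is never reached, and the
    // connection (plus everything it references) stays alive.
    static void leaky(DataSource ds) throws SQLException {
        Connection conn = ds.getConnection();
        runQuery(conn);
        conn.close();
    }

    // Fixed: the finally block runs whether or not runQuery() throws.
    static void fixed(DataSource ds) throws SQLException {
        Connection conn = ds.getConnection();
        try {
            runQuery(conn);
        } finally {
            conn.close();
        }
    }

    static void runQuery(Connection conn) throws SQLException {
        conn.createStatement().executeQuery("SELECT 1").close();
    }
}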
In any case, good luck. I think I'd prefer to find a needle in a haystack; when you find the needle, you at least know you have found it. :)
The root of the problem is most likely a memory leak in the webapp that the client is running. In order to track it down, you need to run the app with a representative workload with memory profiling enabled. Take some snapshots, and then use the profiler to compare the snapshots to see where objects are leaking. While source code would be ideal, you should be able to at least figure out where the leaking objects are being allocated. Then you need to track down the cause.
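If running a representative workload under a profiler for days is impractical, the JVM can at least be told to capture the evidence at the moment of failure. These are standard HotSpot flags (available in Sun Java 6; in JBoss 4 they would typically go into JAVA_OPTS in bin/run.conf), and the dump path is illustrative:

-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/var/log/jboss-heapdumps

The resulting .hprof file can then be opened in jvisualvm or the Eclipse Memory Analyzer to see what was filling the heap.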
However, if your customer won't release the binaries so that you can run an identical system to the one he is running, you are kind of stuck, and you'll need to get the customer to do the profiling and leak detection himself.
BTW - there is not a lot of point in causing the webapp to throw an OutOfMemoryError. It won't tell you why it is happening, and without understanding "why", you cannot do much about it.
EDIT
There is no point "measuring the limits" if the root cause of the memory leak is in the client's code. Assuming that you are providing a servlet hosting service, the best thing to do is to provide the client with instructions on how to debug memory leaks... and step out of the way. And if they have a support contract that requires you to (in effect) debug their code, they ought to provide you with the source code to do your job.
