I'm developing a plugin for IntelliJ IDEA and writing some tests. In one of my tests I need the persistent state to be updated, but it doesn't happen, and neither loadState nor getState is called.
I wrote a class that implements PersistentStateComponent (I think I did it correctly, because it's not the first time I've done it). In one test I call a method that adds data to the State class; the data is added successfully but not saved. Another test should read that data, but it gets an empty State. The test class extends LightPlatformTestCase.
The documentation says:
Persistent Component Lifecycle
The loadState() method is called after the component has been created (only if there is some non-default state persisted for the component), and after the XML file with the persisted state is changed externally (for example, if the project file was updated from the version control system). In the latter case, the component is responsible for updating the UI and other related components according to the changed state.
The getState() method is called every time the settings are saved (for example, on frame deactivation or when closing the IDE). If the state returned from getState() is equal to the default state (obtained by creating the state class with a default constructor), nothing is persisted in the XML. Otherwise, the returned state is serialized in XML and stored.
So is it possible that neither of these conditions occurs?
Can I do something in the test method to update my persistent state?
Or is it supposed to work, and I should look for the issue in my code?
Update: when I run the plugin itself, it works fine.
My class looks like this:
@State(name = "MyStateName", storages = {@Storage(id = "MyStateId", file = "D:/MyStateName.xml")})
public class MyClass implements PersistentStateComponent<MyClass.State> {

    public static class State {
        Integer someValue = 10;
    }

    State myState = new State();

    public State getState() {
        return myState;
    }

    public void loadState(State state) {
        myState = state;
    }

    public Integer getValue() {
        return myState.someValue;
    }

    public void addValue(Integer value) {
        myState.someValue = value;
    }
}
When you're using PersistentStateComponent, your only responsibility as a plugin developer is to return an instance of the State class. Therefore, the only thing it makes sense for you to test is that your instance is returned correctly. You don't need to test the XML serialization, because that is part of IntelliJ IDEA's code, which is not your responsibility and is already reasonably well tested.
That's why, when IntelliJ IDEA runs tests for your code, it does not save or load settings for your components. You can trigger the settings saving manually if you need to, but in general it isn't necessary.
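In practice that means a test can just exercise the component directly, without any save/load round trip through the platform. A minimal sketch, reusing MyClass from the question (plain JUnit 3-style assertions, as used by LightPlatformTestCase):

public void testStateIsReturnedAndReloaded() {
    MyClass component = new MyClass();
    component.addValue(42);

    // getState() should expose what was added
    MyClass.State saved = component.getState();
    assertEquals(Integer.valueOf(42), saved.someValue);

    // simulate what the platform does on startup: feed the state back in
    MyClass fresh = new MyClass();
    fresh.loadState(saved);
    assertEquals(Integer.valueOf(42), fresh.getValue());
}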
Please post a separate question regarding storing multiple instances of your configuration objects.
I want to publish an event if and only if there were changes to the DB. I'm running under @Transactional in a Spring context, and I came up with this check:
Session session = entityManager.unwrap(Session.class);
session.isDirty();
That seems to fail for new (Transient) objects:
@Transactional
public Entity save(Entity newEntity) {
    Entity entity = entityRepository.save(newEntity);
    Session session = entityManager.unwrap(Session.class);
    session.isDirty(); // <-- returns `false` ):
    return entity;
}
Based on the answer here https://stackoverflow.com/a/5268617/672689 I would expect it to work and return true.
What am I missing?
UPDATE
Considering @fladdimir's answer: although this function is called in a transactional context, I did add @Transactional (from org.springframework.transaction.annotation) on the function, but I still see the same behaviour; isDirty() still returns false.
Moreover, as expected, the new entity doesn't show up in the DB while the program is held at a breakpoint on the session.isDirty() line.
UPDATE_2
I also tried changing the session flush modes before calling the repository save, also without any effect:
session.setFlushMode(FlushModeType.COMMIT);
session.setHibernateFlushMode(FlushMode.MANUAL);
First of all, Session.isDirty() has a different meaning than what I understood. It tells you whether the current session is holding in-memory changes that haven't yet been sent to the DB, whereas I thought it tells you whether the transaction contains any modifying statements. When saving a new entity, even within a transaction, the insert must be sent to the DB in order to obtain the new entity's id, so isDirty() will always be false right after it.
So I ended up creating a class that extends SessionImpl and holds the change status for the session, updating it on the persist and merge calls (the methods Hibernate uses).
So this is the class I wrote:
import org.hibernate.HibernateException;
import org.hibernate.internal.SessionCreationOptions;
import org.hibernate.internal.SessionFactoryImpl;
import org.hibernate.internal.SessionImpl;

public class CustomSession extends SessionImpl {

    private boolean changed;

    public CustomSession(SessionFactoryImpl factory, SessionCreationOptions options) {
        super(factory, options);
        changed = false;
    }

    @Override
    public void persist(Object object) throws HibernateException {
        super.persist(object);
        changed = true;
    }

    @Override
    public void flush() throws HibernateException {
        changed = changed || isDirty();
        super.flush();
    }

    public boolean isChanged() {
        return changed || isDirty();
    }
}
In order to use it I had to:
extend SessionFactoryImpl.SessionBuilderImpl to override the openSession function and return my CustomSession
extend SessionFactoryImpl to override the withOptions function to return the extended SessionFactoryImpl.SessionBuilderImpl
extend AbstractDelegatingSessionFactoryBuilderImplementor to override the build function to return the extended SessionFactoryImpl
implement SessionFactoryBuilderFactory to implement getSessionFactoryBuilder to return the extended AbstractDelegatingSessionFactoryBuilderImplementor
add an org.hibernate.boot.spi.SessionFactoryBuilderFactory file under META-INF/services containing the fully qualified class name of my SessionFactoryBuilderFactory implementation (so that it gets picked up at bootstrap).
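For reference, that last step is just a plain text file; assuming the implementation lives in a package of your own (com.example.config here is made up), it would be:

META-INF/services/org.hibernate.boot.spi.SessionFactoryBuilderFactory

containing a single line with the implementation's fully qualified name:

com.example.config.CustomSessionFactoryBuilderFactory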
UPDATE
There was a bug with capturing the "merge" calls (as the tremendous7 comment pointed out), so I ended up capturing the isDirty state before any flush, and also checking it once more in isChanged().
The following is a different approach you might be able to leverage to track dirtiness.
Though architecturally different from your sample code, it may be closer to your actual goal (I want to publish an event if and only if there were changes to the DB).
Maybe you could use an Interceptor listener to let the entity manager do the heavy lifting and just TELL you what's dirty. Then you only have to react to it, instead of prodding it to sort out what's dirty in the first place.
Take a look at this article: https://www.baeldung.com/hibernate-entity-lifecycle
It has a lot of test cases that basically check for dirtiness of objects being saved in various contexts, and it relies on a piece of code called the DirtyDataInspector that effectively listens for any items flagged dirty on flush and simply remembers them (i.e. keeps them in a list), so the unit test cases can assert that the things that SHOULD have been dirty were actually flushed as dirty.
The dirty data inspector code is on their github. Here's the direct link for ease of access.
Here is the code where the interceptor is applied to the factory so it can be effective. You might need to write this up in your injection framework accordingly.
The Interceptor it is based on has a TON of lifecycle methods you can probably exploit to get the perfect behavior for "do this if there was actually a dirty save that occurred".
You can see the full docs of it here.
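As a rough illustration of that idea, a Hibernate 5-style interceptor (extending EmptyInterceptor; the class name and the choice to also count inserts are my own assumptions, not taken from the linked code) could look like this:

import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

import org.hibernate.EmptyInterceptor;
import org.hibernate.type.Type;

public class ChangeTrackingInterceptor extends EmptyInterceptor {

    private final List<Object> changedEntities = new ArrayList<>();

    @Override
    public boolean onFlushDirty(Object entity, Serializable id, Object[] currentState,
                                Object[] previousState, String[] propertyNames, Type[] types) {
        // called when an already-persistent entity is found dirty at flush time
        changedEntities.add(entity);
        return false; // we did not modify the entity state
    }

    @Override
    public boolean onSave(Object entity, Serializable id, Object[] state,
                          String[] propertyNames, Type[] types) {
        // called when a new entity is inserted, which also counts as a DB change
        changedEntities.add(entity);
        return false;
    }

    public boolean hasChanges() {
        return !changedEntities.isEmpty();
    }
}

After the transaction you would only publish your event if hasChanges() returns true.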
We do not know your complete setup, but as @Christian Beikov suggested in the comment, is it possible that the insertion was already flushed before you called isDirty()?
This would happen if you called repository.save(newEntity) without a running transaction, since SimpleJpaRepository's save method is itself annotated with @Transactional:
@Transactional
@Override
public <S extends T> S save(S entity) {
    ...
}
This will wrap the call in a new transaction if none is already active, and flush the insertion to the DB at the end of the transaction just before the method returns.
You might choose to annotate the method where you call save and isDirty with #Transactional, so that the transaction is created when your method is called, and propagated to the repository call. This way the transaction would not be committed when the save returns, and the session would still be dirty.
(Edit, just for completeness: when using an identity ID generation strategy, the insertion of a newly created entity is flushed during the repository's save call in order to generate the ID, before the running transaction is committed.)
I am using the PostContextCreate part of the life cycle in an e4 RCP application to create the back-end "business logic" part of my application. I then inject it into the context using an IEclipseContext. I now have a requirement to persist some business logic configuration options between executions of my application. I have some questions:
It looks like properties (e.g. accessible from MContext) would be really useful here; a straightforward Map<String,String> sounds ideal for my simple requirements, but how can I get them in PostContextCreate?
Will my properties persist if my application is being run with clearPersistedState set to true? (I'm guessing not).
If I turn clearPersistedState off then will it try and persist the other stuff that I injected into the context?
Or am I going about this all wrong? Any suggestions would be welcome. I may just give up and read/write my own properties file.
I think the Map returned by MApplicationElement.getPersistedState() is intended to be used for persistent data. This will be cleared by -clearPersistedState.
The PostContextCreate method of the life cycle is run quite early in the startup and not everything is available at this point. So you might have to wait for the app startup complete event (UIEvents.UILifeCycle.APP_STARTUP_COMPLETE) before accessing the persisted state data.
You can always use the traditional Platform.getStateLocation(bundle) to get a location in the workspace .metadata to store arbitrary data. This is not touched by clearPersistedState.
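A minimal sketch of that last option (the file name and property key are made up, and error handling is reduced to the checked IOException):

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Properties;

import org.eclipse.core.runtime.IPath;
import org.eclipse.core.runtime.Platform;
import org.osgi.framework.Bundle;
import org.osgi.framework.FrameworkUtil;

public class BusinessLogicOptions {

    public Properties loadAndUpdate() throws IOException {
        Bundle bundle = FrameworkUtil.getBundle(getClass());
        IPath stateLocation = Platform.getStateLocation(bundle); // under the workspace .metadata
        File file = stateLocation.append("options.properties").toFile();

        Properties props = new Properties();
        if (file.exists()) {
            try (FileInputStream in = new FileInputStream(file)) {
                props.load(in);
            }
        }

        props.setProperty("some.option", "some value");
        try (FileOutputStream out = new FileOutputStream(file)) {
            props.store(out, "business logic configuration");
        }
        return props;
    }
}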
Update:
To subscribe to the app startup complete event:
@PostContextCreate
public void postContextCreate(IEventBroker eventBroker)
{
    eventBroker.subscribe(UIEvents.UILifeCycle.APP_STARTUP_COMPLETE, new AppStartupCompleteEventHandler());
}

private static final class AppStartupCompleteEventHandler implements EventHandler
{
    @Override
    public void handleEvent(final Event event)
    {
        // ... your code here
    }
}
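Once the application model is available, reading and writing the persisted state map could look roughly like this (the injection point and the key name are assumptions):

import java.util.Map;

import javax.inject.Inject;

import org.eclipse.e4.ui.model.application.MApplication;

public class OptionsStore {

    @Inject
    private MApplication application;

    public String rememberOption() {
        Map<String, String> state = application.getPersistedState();
        String previous = state.get("my.option");   // null on the first run
        state.put("my.option", "some value");       // written out on shutdown
        return previous;
    }
}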
So say I look up an object from the repository. If I save this object immediately after the lookup, Spring Data is smart enough not to update the database. If I change a property within this object and then save it, Spring Data does an update. How does it know whether or not it needs to do an update?
This is not provided by Spring Data; it's a feature of your persistence framework (Hibernate, OpenJPA, EclipseLink, ...).
Persistence providers enhance the domain objects with some "stuff" for optimization. Normally this is done by so-called runtime enhancement, i.e. your class gets loaded inside the application and enhanced there (runtime weaving).
OpenJPA also allows build-time enhancement, which means the "OpenJPA domain extension stuff" gets added to your entities at compile time (there is a Maven goal in the OpenJPA plugin for this too):
https://openjpa.apache.org/builds/2.2.2/apache-openjpa/docs/ref_guide_pc_enhance.html
If you run mvn openjpa:enhance, your simple domain class will now look like the following:
(I used jad to decompile the class; as it is too long to show everything, I copied the most relevant parts.)
import org.apache.openjpa.enhance.*;
import org.apache.openjpa.util.IntId;
import org.apache.openjpa.util.InternalException;

public class Entity implements PersistenceCapable
{
    public Integer getId()
    {
        return pcGetid(this);
    }

    public void setId(Integer id)
    {
        pcSetid(this, id);
    }

    ....
    ....

    private static final void pcSetid(Entity entity, Integer integer)
    {
        if(entity.pcStateManager == null)
        {
            entity.id = integer;
            return;
        } else
        {
            entity.pcStateManager.settingObjectField(entity, pcInheritedFieldCount + 3, entity.id, integer, 0);
            return;
        }
    }

    ....

    protected void pcClearFields()
    {
        id = null;
    }

    public PersistenceCapable pcNewInstance(StateManager statemanager, Object obj, boolean flag)
    {
        Entity entity = new Entity();
        if(flag)
            entity.pcClearFields();
        entity.pcStateManager = statemanager;
        entity.pcCopyKeyFieldsFromObjectId(obj);
        return entity;
    }
}
By manipulating your entity, the pcStateManager gets invoked. When you run a persist operation, the persistence framework asks the state manager whether there are changes in your entity and sends an update to the database only if necessary.
Spring doesn't actually work directly on instances of your class. What it does is create a proxy that wraps the actual instance and delegates to it. This proxy holds the persistence state of the underlying instance; in other words, it knows whether the instance in memory is still in the same state as it is in the database.
If you invoke (certain) methods, it will consider itself dirty and the EntityManager will have to push those changes. If you don't, then it also knows that no changes need to be pushed.
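To see the effect in practice, here is a small sketch in the style of the question's snippet (the entity, repository, and property names are invented for illustration):

@Transactional
public void rename(long id, String newName) {
    Entity entity = entityRepository.findById(id)
            .orElseThrow(() -> new IllegalArgumentException("not found"));

    // the provider tracks the loaded instance, so changing a property is enough;
    // the dirty entity is flushed as an UPDATE when the transaction commits
    entity.setName(newName);

    // no explicit save() is needed, and if newName equals the current value
    // the entity is not considered dirty, so no UPDATE is issued at all
}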
I have been wrestling with this problem for a while. I would like to use the same Stripes ActionBean for show and update actions. However, I have not been able to figure out how to do this in a clean way that allows reliable binding, validation, and verification of object ownership by the current user.
For example, let's say our action bean takes a postingId. The posting belongs to a user, who is logged in. We might have something like this:
@UrlBinding("/posting/{postingId}")
@RolesAllowed({ "USER" })
public class PostingActionBean extends BaseActionBean
Now, for the show action, we could define:
private int postingId; // assume the parameter in @UrlBinding above was renamed
private Posting posting;
And now use @After(stages = LifecycleStage.BindingAndValidation) to fetch the Posting. Our @After method can verify that the currently logged-in user owns the posting. We must use @After, not @Before, because the postingId won't have been bound to the parameter beforehand.
However, for an update action, you want to bind the Posting object to the posting variable using @Before, not @After, so that the submitted form entries get applied on top of the existing Posting object, instead of onto an empty stub.
A custom TypeConverter<T> would work well here, but because the session isn't available from the TypeConverter interface, it's difficult to validate ownership of the object during binding.
The only solution I can see is to use two separate action beans, one for show and one for update. If you do this, however, the <stripes:form> tag and its downstream tags won't correctly populate the values of the form, because the beanclass or action attributes must map back to the same ActionBean.
As far as I can see, the Stripes model only holds together when manipulating simple (non-POJO) parameters. In any other case, you seem to run into a catch-22 of binding your object from your data store and then overwriting it with updates sent from the client.
I've got to be missing something. What is the best practice from experienced Stripes users?
In my opinion, authorisation is orthogonal to object hydration. By this, I mean that you should separate the concerns of object hydration (in this case, using a postingId and turning it into a Posting) away from determining whether a user has authorisation to perform operations on that object (like show, update, delete, etc.,).
For object hydration, I use a TypeConverter<T>, and I hydrate the object without regard to the session user. Then inside my ActionBean I have a guard around the setter, thus...
public void setPosting(Posting posting) {
    if (accessible(posting)) this.posting = posting;
}
where accessible(posting) looks something like this...
private boolean accessible(Posting posting) {
    return authorisationChecker.isAuthorised(whoAmI(), posting);
}
Then your show() event method would look like this...
public Resolution show() {
    if (posting == null) return NOT_FOUND;
    return new ForwardResolution("/WEB-INF/jsp/posting.jsp");
}
Separately, when I use Stripes I often have multiple events (like "show", or "update") within the same Stripes ActionBean. For me it makes sense to group operations (verbs) around a related noun.
Using clean URLs, your ActionBean annotations would look like this...
@UrlBinding("/posting/{$event}/{posting}")
@RolesAllowed({ "USER" })
public class PostingActionBean extends BaseActionBean
...where {$event} is the name of your event method (i.e. "show" or "update"). Note that I am using {posting}, and not {postingId}.
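The type converter that makes the {posting} binding work could look roughly like this (a sketch only: PostingService and its findById method are assumptions, and a failed lookup simply yields null so the event methods can react to it):

import java.util.Collection;
import java.util.Locale;

import net.sourceforge.stripes.validation.TypeConverter;
import net.sourceforge.stripes.validation.ValidationError;

public class PostingTypeConverter implements TypeConverter<Posting> {

    private PostingService postingService; // however your wiring provides this

    @Override
    public void setLocale(Locale locale) {
        // not needed for an id-based lookup
    }

    @Override
    public Posting convert(String input, Class<? extends Posting> targetType,
                           Collection<ValidationError> errors) {
        // hydrate without regard to the session user; authorisation is
        // handled later by the guarded setter in the ActionBean
        return postingService.findById(Integer.valueOf(input));
    }
}

Depending on your setup you may still need to register the converter, for example with @Validate(converter = PostingTypeConverter.class) on the posting property.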
For completeness, here is what your update() event method might look like...
public Resolution update() {
    if (posting == null) throw new UnauthorisedAccessException();
    postingService.saveOrUpdate(posting);
    message("posting.save.confirmation");
    return new RedirectResolution(PostingsAction.class);
}
Recently I've had some problems with people cheating using an app for root users called Gamecih. Gamecih lets users pause games and change variables at runtime.
I thought that if I obfuscate my code it will be hard for cheaters to know what variables to change at runtime, but I'm also worried it might cause some other problems.
I serialize game objects using Java's Serializable interface and then write them out to a file. Now let's say I'm serializing an object of the class "Player". It gets serialized and saved to a file. Then a user downloads the update with the ProGuard implementation. ProGuard will rename classes and class member names. Won't that cause major errors when trying to read in an already saved Player object?
If I had not yet launched my game, this wouldn't be a problem. But some players have now been playing on the same saved game (it's an RPG) for months. They would be pretty pissed off if they downloaded an update and had to start over from scratch.
I know I can instruct ProGuard not to obfuscate certain classes, but it's the Player class I really need to obfuscate.
Clarification: let's say I have the following simple unobfuscated class:
public class Player {

    private int gold;
    private String name;
    // Lots more.

    public Player(String name)
    {
        this.name = name;
    }

    public int getGold() {
        return gold;
    }

    public void setGold(int gold) {
        this.gold = gold;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}
A player is created, serialized and saved to a file. After I run the obfuscator, the class might look like this:
public class Axynasf {

    private int akdmakn;
    private String anxcmjna;

    public Axynasf(String adna)
    {
        anxcmjna = adna;
    }

    public int getAkdmakn() {
        return akdmakn;
    }

    public void setAkdmakn(int akdmakn) {
        this.akdmakn = akdmakn;
    }

    public String getAnxcmjna() {
        return anxcmjna;
    }

    public void setAnxcmjna(String anxcmjna) {
        this.anxcmjna = anxcmjna;
    }
}
Imagine that I post an update now and a player who has an unobfuscated version of Player downloads that update. When trying to read that object, there will be different member names and a different class name. I'll most likely get a ClassCastException or something of the sort.
I'm no expert in ProGuard, but I think you're right to assume it is going to break serialisation.
One possible way of solving this might be to implement a layer over your current save structure - you can tell ProGuard which classes you don't want to obfuscate. Leave Player (and similar objects) as they are for now and don't obfuscate them. Once the object has been deserialised, pass it up to your new layer (which is obfuscated) and have the rest of the game deal only with that layer - if you don't retain the non-obfuscated object, it will still cause cheaters problems when tweaking during game play (although not at load time). At the same time, you could look at moving your players' game files over to another save option that doesn't depend on serialisation, which will probably make such issues easier to handle in the future.
For ensuring compatible serialization in ProGuard:
ProGuard manual > Examples > Processing serializable classes
For upgrading a serialized class to a different class in Java:
JDK documentation > Serialization > Object Input Classes > readResolve
JDK documentation > Serialization > Object Serialization Examples > Evolution/Substitution
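From memory, the ProGuard side boils down to keep rules along these lines (check the manual section above for the authoritative version); they preserve the names of serializable classes and their serialized fields so that existing save files stay readable:

-keepnames class * implements java.io.Serializable

-keepclassmembers class * implements java.io.Serializable {
    static final long serialVersionUID;
    private static final java.io.ObjectStreamField[] serialPersistentFields;
    !static !transient <fields>;
    private void writeObject(java.io.ObjectOutputStream);
    private void readObject(java.io.ObjectInputStream);
    java.lang.Object writeReplace();
    java.lang.Object readResolve();
}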
I understand people can update variables at runtime with the app you named.
If you only change the member names, the values will still give hints.
If you obfuscate, the class name will change, but the new name will end up on a forum anyway.
So this is not enough.
What you could do in your update is, at startup, load the serialized data into the old object, transfer it to the "new" obfuscated class, and use custom serialization (for example with an XOR using the device ID value or the Gmail address, to make it less obvious).
Try to split your player data across several classes too.
What I would do in your situation:
Release an update that contains both the obfuscated and the non-obfuscated class. When the player data gets loaded, try both classes. If the player was loaded with the non-obfuscated class, map it onto your obfuscated class.
When the player data gets saved, save it with the obfuscated class.
After a suitable amount of time, release an update with only the obfuscated classes.
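A rough sketch of that load-and-migrate step (class and getter names follow the question's example; both classes are assumed to stay Serializable, and error handling is simplified):

import java.io.File;
import java.io.FileInputStream;
import java.io.ObjectInputStream;

public class SaveMigrator {

    public Axynasf load(File saveFile) throws Exception {
        // first try the old, non-obfuscated format
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(saveFile))) {
            Player legacy = (Player) in.readObject();
            Axynasf migrated = new Axynasf(legacy.getName());
            migrated.setAkdmakn(legacy.getGold());
            return migrated;                       // will be re-saved in the new format
        } catch (Exception notAnOldSave) {
            // fall through and try the already-migrated format
        }
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(saveFile))) {
            return (Axynasf) in.readObject();
        }
    }
}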