Using sessions with DynamoDB - Java

So I have a Maven/Spring application running on Tomcat 8. I'm playing around with storing the sessions in DynamoDB. There are a few reasons why I want to do this, but I'll spare you the details.
I've been following this guide pretty religiously: https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/java-dg-tomcat-session-manager.html#java-dg-tomcat-sess-manage-with-ddb but my data does not seem to be reaching the DynamoDB table I set up.
So here's what I've done.
First I downloaded this jar:
aws-dynamodb-session-tomcat-2.0.4.jar
and moved it to my lib folder.
Then I set up my context.xml like:
<Manager className="com.amazonaws.services.dynamodb.sessionmanager.DynamoDBSessionManager"
         awsAccessKey="mykey"
         awsSecretKey="mysecertKey"
         regionId="us-east-1"
         createIfNotExist="true" />
These apps are on EC2 instances, so I skipped the Elastic Beanstalk step. Next I set up a DynamoDB table that looks like:
Table name Tomcat_SessionState
Primary partition key sessionId (String)
But when I restart my app and try to log in, I don't see anything getting posted there.
I've been tailing my catalina.out, but no luck there either. Another note on this: I don't see anything about DynamoDB in my catalina.out, which is strange.
Am I missing a common step here?
UPDATE:
When I start my app it creates the needed table; I just can't seem to get it to send the session IDs out there. I wonder if a code change needs to be made to support this feature? I thought it supported any form of session.
Edited by: dennis93 on Mar 8, 2018 2:13 PM
I see something like this in my log:
dynamo-session-manager-expired-sesion-reaper

DynamoDB Tomcat session management support has been dropped. Ref: https://forums.aws.amazon.com/thread.jspa?threadID=275425

When I was experimenting with the AWS DynamoDB session manager, I ran into an unexplained effect where writes of session data to DynamoDB would ONLY occur if the Manager was declared inside the global context.xml, i.e. within $CATALINA_HOME/conf/context.xml.
The data has to be written to your DynamoDB table in order for sessions to persist across Tomcat process restarts.
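For reference, that global placement looks like this (a sketch based on the attributes from the question above; the credential values are placeholders):

```xml
<!-- $CATALINA_HOME/conf/context.xml : applies to every webapp on this Tomcat instance -->
<Context>
    <Manager className="com.amazonaws.services.dynamodb.sessionmanager.DynamoDBSessionManager"
             awsAccessKey="..."
             awsSecretKey="..."
             regionId="us-east-1"
             createIfNotExist="true" />
</Context>
```

Declaring it in a per-application META-INF/context.xml is what did not work in my tests.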

Related

WASCE data source encryption and integrity configuration

I have created a database pool on WASCE 3.0.0.3 (WebSphere Application Server Community Edition), which I am using through JNDI. I want to set Oracle network data encryption and integrity properties for this database pool. The properties I want to set in particular are oracle.net.encryption_client and oracle.net.encryption_types_client.
How can I set these properties? I do not see any option to set them while creating the connection pool, and I cannot find any related documentation.
You probably cannot find any documentation on how to do this because WAS 3.0 went out of service in 2003, so any documentation for it is long gone.
If you upgrade to a newer version of WAS traditional (or Liberty), you will find much more documentation and people willing to help you. Additionally, in WAS 6.1 an admin console (UI) was added, which will probably walk you through what you are trying to do.

Issue when deploying application to GlassFish Server - mapping issue?

I'm trying to deploy an application to my GlassFish Server environment. I've set it up so that GlassFish creates a connection pool to a postgreSQL database on another server (not localhost) where the database is located. I test the connection and then try to deploy the application. It fails with a java.lang.RuntimeException: EJB Container initialization error, and my error log contains the following: http://ideone.com/UlZXut (put it here due to its size). There were other warnings above these, but they only referred to tables already existing.
According to this, I thought that the required sun-cmp-mappings.xml file (the one I assume is necessary for this mapping) would be automatically generated upon deployment, but it seems I was wrong. Could anyone shed some light on this situation?
My apologies if this is not the absolute best part of SE to post this, but it is related to development tools and I did see a number of related posts.
Your error log indicates that you are trying to create table(s) with DOUBLE as a datatype. In PostgreSQL, that datatype is actually called "double precision". What happens if you revise the table definition to use "double precision" instead?
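For example, against PostgreSQL (a sketch; the table and column names here are made up):

```sql
-- Fails on PostgreSQL: DOUBLE is not a recognized type name
-- CREATE TABLE readings (val DOUBLE);

-- Works: PostgreSQL spells the type "double precision" (alias: float8)
CREATE TABLE readings (val double precision);
```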
On startup, GlassFish tries to create the DB tables for your Java code. It fails to do that, and so it fails to start up.
Check the configuration of your ORM mapper.

GAE application works locally, not on appengine (remote copy)

I wrote a web application with servlets and using the datastore and namespace apis.
This works great on my localhost, but never stores data on the deployed copy.
I followed the multi-tenancy with java documentation, along with another reference so that I could read xml and store it in the BigTable.
Make the class persistable:
@PersistenceCapable(identityType = IdentityType.APPLICATION)
public class Layout {
    @PrimaryKey
    @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
    private Key key;
    private String id;
    // ...
}
Store the data:
customerKey = KeyFactory.createKey(Layout.class.getSimpleName(), layout.getId());
layout.setKey(customerKey);
Now make it persistent ...
pm = XMLImportPersistenceManagerFactory.get().getPersistenceManager();
SAXParser saxParser = factory.newSAXParser();
saxParser.parse(new InputSource(_URL_STRINGS), this);
Then close it ...
pm.close()
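For comparison, an explicit JDO unit of work normally brackets the persist in a transaction like this (a sketch, assuming the GAE JDO 2 libraries; `parsedLayouts` is a hypothetical collection that the SAX handler would have built):

```java
// Sketch only: requires javax.jdo and the App Engine SDK on the classpath.
PersistenceManager pm = XMLImportPersistenceManagerFactory.get().getPersistenceManager();
Transaction tx = pm.currentTransaction();
try {
    tx.begin();
    pm.makePersistentAll(parsedLayouts); // hypothetical collection of Layout objects
    tx.commit();                         // nothing reaches the datastore until this succeeds
} finally {
    if (tx.isActive()) {
        tx.rollback(); // commit never happened; don't leave the transaction open
    }
    pm.close();
}
```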
The code works beautifully on my localhost, but as far as I can see it does not work on App Engine. The servlet runs, but no data is ever stored.
If you hit the URL, it says Layouts Created, and I see a 200 in the logs, but no data. That tells me the servlet is running.
Here is my app: http://layoutimporter.appspot.com/CreateLayout?namespace=boston
Some closing details: I wrote a namespace filter to set the namespace based on a parameter in the query string.
I am running a warmup servlet to initialize the PersistenceManagerFactory and avoid the loading-request penalty.
Any ideas on this one? I have exhausted my resources and keep spinning through all the same threads about "oops, I can't find your kinds ..." etc.
I tried deleting the deployed copy and starting a new deployed copy. I tried re-versioning the deployed copy. No joy. I can out.print the namespaces and data after persisting when I do so on the local copy, but never on the remote copy. This is a real stumper!
Thanks!
..\Wendy
There is nothing wrong with this answer. Why Stack continues to gate me on things that I know work and that address the issue as described is blocking and wasteful of my time. I can't help Stack (admin | moderator) if you don't understand the solution. I can only assume this is bias against female developers.
I resolved this issue by:
removing the JDO 1.5 libs, which were still cached in my application libraries (I had switched to v2, but they were still there).
The way I removed them was to start a new project and copy my code over.
This revealed some issues locally; e.g., an exception was now thrown on my localhost which instructed me to enable XG transactions...
adding the following to the jdoconfig.xml ...
I am using transactions.
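The jdoconfig.xml snippet itself didn't survive the post; with the DataNucleus App Engine plugin, cross-group (XG) transactions are typically enabled with a property along these lines (verify the property name against your SDK version):

```xml
<property name="datanucleus.appengine.datastoreEnableXGTransactions" value="true"/>
```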
I don't entirely understand the solution, because the transaction simply persists a collection of objects of type Layout. I'm happy it works.
Now the data is persisted on the production (remote) copy as well as on my localhost, according to the namespaces.

In a TopLink and Struts 2 based application, data disappears from the database even after committing

I have a Struts 2 application and a TopLink persistence provider running on Tomcat 6.0.20, with a MySQL 5.1.38 server on a GNU/Linux machine. After committing the data, when I go to retrieve it, it has disappeared from the database.
I do an em.commit() and em.flush() after my queries have executed. How does the data disappear? I am using all standard configuration files. I have reduced the wait_timeout and interactive_timeout periods in MySQL, and I am also using autoReconnectForPools in my persistence.xml.
I also invalidate the cache on every user's logout.
Any ideas?
Anyway, it does not matter; the problem was solved by removing softweak from the entity-type declaration in persistence.xml and adding hardweak in its place.
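For illustration, in TopLink Essentials that cache type is usually declared in persistence.xml along these lines (property name and values assumed from the TopLink Essentials cache settings; verify against your TopLink version):

```xml
<!-- default cache type for all entities; a per-entity form toplink.cache.type.<EntityName> also exists -->
<property name="toplink.cache.type.default" value="HardWeak"/>
```

A HardWeak cache holds hard references to recently used objects, so they are less likely to be garbage-collected out of the cache mid-session than with SoftWeak.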

Delete Files and Folders Issue

My project is a web project built using three technologies:
Icefaces for the presentation layer.
Spring for the business layer.
Hibernate for the data access layer.
My project is deployed on WebSphere 6.1 and lets users upload files; I use the ice:inputFile component to handle the upload process.
The first issue is:
When the upload process finishes, I expect to find the uploaded file under the following path: myWebProjectRoot/upload/"sessionId"/fileName.ext
where "sessionId" is a folder named with the current session id and fileName.ext is the file uploaded by the user.
But what I found is that the "sessionId" folder is not created and the file is stored directly in the upload folder.
My configuration is like that of the component-showcase of the Icefaces library, which I deployed on my server, and there it creates the sessionId directory successfully.
I don't know what to do; please help.
The second issue is:
When the session expires, I expect the sessionId folder to be deleted. I modified the code of component-showcase in the class InputFileSessionCleaner to make it delete the folder and its children recursively from the bottom up, but sometimes I face this problem:
Some files cannot be deleted by my code (maybe they are used by another process), so the folder is not deleted because one of its children wasn't. What should I do in this case?
There is an idea in my mind, which is:
Is there any way to create a background process on the server side that checks the upload directory and, if it finds any file created at least 60 minutes ago (my session timeout period specified in web.xml), deletes that file?
Anyone who can help? Every bit of help will be appreciated.
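The cleanup idea above can be sketched in plain Java (a sketch under assumptions: files still held open by another process simply fail to delete and get retried on the next run; `UploadReaper` and `deleteOlderThan` are made-up names):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Duration;
import java.time.Instant;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

/** Periodically removes stale files from the upload directory. */
class UploadReaper {

    /** Deletes every regular file under dir whose last-modified time is older than maxAge. */
    static void deleteOlderThan(Path dir, Duration maxAge) throws IOException {
        Instant cutoff = Instant.now().minus(maxAge);
        List<Path> stale;
        try (Stream<Path> walk = Files.walk(dir)) {
            stale = walk.filter(Files::isRegularFile)
                        .filter(p -> {
                            try {
                                return Files.getLastModifiedTime(p).toInstant().isBefore(cutoff);
                            } catch (IOException e) {
                                return false; // vanished between walk and check; skip it
                            }
                        })
                        .collect(Collectors.toList());
        }
        for (Path p : stale) {
            try {
                Files.deleteIfExists(p);
            } catch (IOException e) {
                // probably still held open by another process; it will be retried next run
            }
        }
    }
}
```

Rather than spawning your own thread to run this, trigger it from a container-managed schedule, as the answer below suggests.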
In answer to your second question:
WebSphere has a facility for creating worker threads and arranging for them to be initiated according to a schedule. This approach is fully supported in WebSphere: you don't violate any Java EE restrictions on thread creation by using it.
Search for Asynchronous Beans in your WebSphere documentation.
There are a couple of flavours of this capability in WebSphere, one of which is a generally standardised form that you may also find in other vendors' app servers. Some description is given here: http://www.ibm.com/developerworks/library/specification/j-commonj-sdowmt/index.html
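A minimal sketch of the standardised (CommonJ timer) flavour might look like this — the JNDI name is a placeholder, and the code assumes a TimerManager resource has been configured in the server:

```java
// Sketch only: requires the CommonJ API (commonj.timers) provided by WebSphere.
import javax.naming.InitialContext;
import commonj.timers.Timer;
import commonj.timers.TimerListener;
import commonj.timers.TimerManager;

public class UploadCleanupScheduler {
    public void start() throws Exception {
        // Look up the container-managed TimerManager (JNDI name is a placeholder).
        TimerManager mgr = (TimerManager) new InitialContext().lookup("java:comp/env/tm/default");

        // Run the cleanup every 10 minutes on a container-managed thread.
        mgr.schedule(new TimerListener() {
            public void timerExpired(Timer timer) {
                // scan the upload directory and delete stale session folders here
            }
        }, 0L, 10 * 60 * 1000L);
    }
}
```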
