I have another question, somewhat related to the one I posted in January. I have a list (a rich:extendedDataTable component) that gets updated on the fly, as the user types his search criteria in a separate text box (i.e. the user types in the first 4 characters and, as he keeps typing, the results list changes). It works fine with RichFaces 3, but when I upgraded to RichFaces 4 I got all sorts of compilation problems. The following classes are no longer accessible and there seems to be no suitable replacement for them:
org.richfaces.model.DataProvider
org.richfaces.model.ExtendedTableDataModel
org.richfaces.model.selection.Selection
org.richfaces.model.selection.SimpleSelection
Here is what it was before:
This is the input text that should trigger the search logic:
<h:inputText id="firmname" value="#{ExtendedTableBean.searchValue}">
<a4j:support ajaxSingle="true" eventsQueue="firmListUpdate"
reRender="resultsTable"
actionListener="#{ExtendedTableBean.searchForResults}" event="onkeyup" />
</h:inputText>
The action listener is what should update the list. Here is the extendedDataTable, right below the inputText:
<rich:extendedDataTable tableState="#{ExtendedTableBean.tableState}" var="item"
id="resultsTable" value="#{ExtendedTableBean.dataModel}">
... <%-- I'm listing columns here --%>
</rich:extendedDataTable>
And here's the back-end code, where I use my data model handling:
package com.beans;
import java.io.FileInputStream;
import java.util.ArrayList;
import java.util.Collections;
import java.util.ConcurrentModificationException;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.CopyOnWriteArrayList;
import javax.faces.context.FacesContext;
import javax.faces.event.ActionEvent;
import org.richfaces.model.DataProvider;
import org.richfaces.model.ExtendedTableDataModel;
public class ExtendedTableBean {
private String sortMode="single";
private ExtendedTableDataModel<ResultObject> dataModel;
//ResultObject is a simple POJO, and getResultsPerValue is a method that
//reads the data from the properties file, assigns it to the POJO, and
//adds the POJO to the list
private Object tableState;
private List<ResultObject> results = new CopyOnWriteArrayList<ResultObject>();
private List<ResultObject> selectedResults =
new CopyOnWriteArrayList<ResultObject>();
private String searchValue;
/**
* This is the action listener that the user triggers, by typing the search value
*/
public void searchForResults(ActionEvent e) {
synchronized(results) {
results.clear();
}
//I don't think it's necessary to clear results list all the time, but here
//I also make sure that we start searching if the value is at least 4
//characters long
if (this.searchValue.length() > 3) {
results.clear();
updateTableList();
} else {
results.clear();
}
dataModel = null; // to force the dataModel to be updated.
}
public List<ResultObject> getResultsPerValue(String searchValue) {
List<ResultObject> resultsList = new CopyOnWriteArrayList<ResultObject>();
//Logic for reading data from the properties file, populating ResultObject
//and adding it to the list
return resultsList;
}
/**
* This method updates a firm list, based on a search value
*/
public void updateTableList() {
try {
List<ResultObject> searchedResults = getResultsPerValue(searchValue);
//Once the results have been retrieved from the properties, empty
//current firm list and replace it with what was found.
synchronized(results) {
results.clear();
results.addAll(searchedResults);
}
} catch(Throwable xcpt) {
//Exception handling
}
}
/**
* This getter lazily re-creates the data model whenever it has been reset,
* which keeps the table list up to date.
*/
public synchronized ExtendedTableDataModel<ResultObject> getDataModel() {
try {
if (dataModel == null) {
dataModel = new ExtendedTableDataModel<ResultObject>(
new DataProvider<ResultObject>() {
public ResultObject getItemByKey(Object key) {
try {
for(ResultObject c : results) {
if (key.equals(getKey(c))){
return c;
}
}
} catch (Exception ex) {
//Exception handling
}
return null;
}
public List<ResultObject> getItemsByRange(
int firstRow, int endRow) {
return Collections.unmodifiableList(results.subList(firstRow, endRow));
}
public Object getKey(ResultObject item) {
return item.getResultName();
}
public int getRowCount() {
return results.size();
}
});
}
} catch (Exception ex) {
//Exception handling
}
return dataModel;
}
//Getters and setters
}
Now that the classes ExtendedTableDataModel and DataProvider are no longer available, what should I be using instead? The RichFaces forum claims there's really nothing and that developers are pretty much on their own (meaning they have to write their own implementation). Does anyone have any other idea or suggestion?
Thanks again for all your help, and sorry again for the lengthy question.
You could convert your data model to extend the abstract org.ajax4jsf.model.ExtendedDataModel instead, which is actually a more robust and performant data model for use with <rich:extendedDataTable/>. Below is a rough translation of your existing model to the new one (I've decided to use your existing ExtendedTableDataModel<ResultObject> as the underlying data source instead of the results list, to demonstrate the translation):
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.faces.context.FacesContext;
import org.ajax4jsf.model.DataVisitor;
import org.ajax4jsf.model.ExtendedDataModel;
import org.ajax4jsf.model.Range;
import org.ajax4jsf.model.SequenceRange;

public class MyDataModel extends ExtendedDataModel<ResultObject> {

    String currentKey; // key of the current row in the model
    Map<String, ResultObject> cachedResults = new HashMap<String, ResultObject>(); // a local cache of search/pagination results
    List<String> cachedRowKeys; // a local cache of key values for cached items
    int rowCount;
    ExtendedTableDataModel<ResultObject> dataModel; // the underlying data source; can be anything

    public void setRowKey(Object key) {
        this.currentKey = (String) key; // the row key is the result name, as produced in walk() below
    }

    public Object getRowKey() {
        return currentKey;
    }

    public void walk(FacesContext context, DataVisitor visitor, Range range, Object argument) throws IOException {
        int firstRow = ((SequenceRange) range).getFirstRow();
        int numberOfRows = ((SequenceRange) range).getRows();
        cachedRowKeys = new ArrayList<String>();
        for (ResultObject result : dataModel.getItemsByRange(firstRow, numberOfRows)) {
            cachedRowKeys.add(result.getResultName());
            cachedResults.put(result.getResultName(), result); // populate the cache. This is strongly advised, as you'll see below.
            visitor.process(context, result.getResultName(), argument);
        }
    }

    public Object getRowData() {
        if (currentKey == null) {
            return null;
        }
        ResultObject selectedRowObject = cachedResults.get(currentKey); // return the result from the internal cache without another trip to the database or other underlying data source
        if (selectedRowObject == null) { // the desired row is not within the range of the cache
            selectedRowObject = dataModel.getItemByKey(currentKey);
            cachedResults.put(currentKey, selectedRowObject);
        }
        return selectedRowObject;
    }

    public int getRowCount() {
        if (rowCount == 0) {
            rowCount = dataModel.getRowCount(); // cache the row count
        }
        return rowCount;
    }
}
Those are the most important methods in that class. There are a bunch of other methods, basically carried over from legacy versions, that you don't need to worry about. If you're saving JSF state to the client, you might be interested in org.ajax4jsf.model.SerializableDataModel for serialization purposes. See an example for that here. It's an old blog post, but the logic is still applicable.
Unrelated to this, your current implementation of getItemByKey will perform poorly in a production-grade app. Having to iterate through every element to return a result? Try a better search algorithm.
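For instance, a minimal keyed-lookup sketch (my own illustration, relying only on getResultName() being unique, which your getKey implementation already assumes):

// Hypothetical index kept alongside the results list for O(1) lookups.
private final Map<Object, ResultObject> resultsByKey = new HashMap<Object, ResultObject>();

private void indexResults(List<ResultObject> freshResults) {
    resultsByKey.clear();
    for (ResultObject r : freshResults) {
        resultsByKey.put(r.getResultName(), r); // the same key that getKey(item) returns
    }
}

public ResultObject getItemByKey(Object key) {
    return resultsByKey.get(key); // constant-time lookup instead of a linear scan
}

Call indexResults(...) wherever the results list is replaced (e.g. in updateTableList) and the linear scan disappears.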
Related
We have a legacy application that reads data from Mongo for each user (the query result is small to large, based on the user's request); our app creates a file for each user and drops it to an FTP server / S3. We read the data as a Mongo cursor and write each batch to the file as soon as the batch arrives, so file-writing performance is decent. This application works great, but it is bound to Mongo and the Mongo cursor.
Now we have to redesign this application to support different data sources, i.e. MongoDB, Postgres, Kinesis, S3, etc. We have considered the following ideas so far:
Build data APIs for each source and expose a paginated REST response. This is a feasible solution, but it might be slow for large query results compared to the current cursor response.
Build a data abstraction layer by feeding batch data into Kafka and reading the batch data stream in our file generator. But most of the time the user asks for sorted data, so we would need to read the messages in sequence; we would lose the benefit of the high throughput and have a lot of extra work combining these messages before writing the file.
We are looking for a solution to replace the current Mongo cursor and make our file generator independent of the data source.
So it sounds like you essentially want to create an API where you maintain the efficiency of streaming as much as possible, as you are already doing by writing the file while you are reading the user data.
In that case, you might want to define a push-parser API for your ReadSources, which will stream data to your WriteTargets, which in turn write the data to anything you have an implementation for. Sorting will be handled on the ReadSource side of things, since for some sources you can read in an ordered manner (such as from databases); for those sources where you can't, you might simply perform an intermediate step to sort your data (such as writing to a temporary table) and then stream it to the WriteTarget.
A basic implementation might look vaguely like this:
public class UserDataRecord {
private String data1;
private String data2;
public String getRecordAsString() {
return data1 + "," + data2;
}
}
public interface WriteTarget<Record> {
/** Write a record to the target */
public void writeRecord(Record record);
/** Finish writing to the target and save everything */
public void commit();
/** Undo whatever was written */
public void rollback();
}
public abstract class ReadSource<Record> {
protected final WriteTarget<Record> writeTarget;
public ReadSource(WriteTarget<Record> writeTarget) { this.writeTarget = writeTarget; }
public abstract void read();
}
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
public class RelationalDatabaseReadSource extends ReadSource<UserDataRecord> {
private Connection dbConnection;
public RelationalDatabaseReadSource (WriteTarget<UserDataRecord> writeTarget, Connection dbConnection) {
super(writeTarget);
this.dbConnection = dbConnection;
}
@Override public void read() {
// read user data from the DB and encapsulate each row in a record
try (Statement statement = dbConnection.createStatement();
ResultSet resultSet = statement.executeQuery("Select * From TABLE Order By COLUMNS")) {
while (resultSet.next()) {
UserDataRecord record = new UserDataRecord();
// populate the record from the result set here (omitted in this sketch),
// then stream it to the write target
writeTarget.writeRecord(record);
}
} catch (SQLException e) {
e.printStackTrace();
}
}
}
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.nio.charset.StandardCharsets;
public class FileWriteTarget implements WriteTarget<UserDataRecord> {
private File fileToWrite;
private PrintWriter writer;
public FileWriteTarget(File fileToWrite) throws IOException {
this.fileToWrite = fileToWrite;
// write UTF-8 text explicitly instead of relying on the platform-default FileWriter encoding
this.writer = new PrintWriter(new OutputStreamWriter(new FileOutputStream(fileToWrite), StandardCharsets.UTF_8));
}
@Override public void writeRecord(UserDataRecord record) {
writer.println(record.getRecordAsString()); // write the record text, not its getBytes() array
}
@Override public void commit() {
// write trailing records
writer.close();
}
@Override
public void rollback() {
try { writer.close(); } catch (Exception e) { }
fileToWrite.delete();
}
}
This is just the general idea and needs serious improvement.
Anyone please feel free to update this API.
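For example, the pieces might be wired together like this (a sketch; the output file name and the source of the JDBC Connection are assumptions):

// Hypothetical wiring: stream user data from a relational database straight into a file.
public static void exportUserData(Connection connection) throws IOException {
    FileWriteTarget target = new FileWriteTarget(new File("user_data.csv")); // assumed file name
    try {
        ReadSource<UserDataRecord> source = new RelationalDatabaseReadSource(target, connection);
        source.read();   // records are streamed to the target as they are read
        target.commit(); // finish writing and keep the file
    } catch (Exception e) {
        target.rollback(); // discard the partial file on failure
    }
}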
I have looked at a lot of stuff on the internet, but I haven't found any solution for my needs.
Here is some sample code which doesn't work, but shows my requirements, for better understanding.
@Service
public class FooCachedService {
@Autowired
private MyDataRepository dataRepository;
private static ConcurrentHashMap<Long, Object> cache = new ConcurrentHashMap<>();
public void save(Data data) {
Data savedData = dataRepository.save(data);
if (savedData.getId() != null) {
cache.put(data.getRecipient(), null);
}
}
public Data load(Long recipient) {
Data result = null;
if (!cache.containsKey(recipient)) {
result = dataRepository.findDataByRecipient(recipient);
if (result != null) {
cache.remove(recipient);
return result;
}
}
while (true) {
try {
if (cache.containsKey(recipient)) {
result = dataRepository.findDataByRecipient(recipient);
break;
}
Thread.sleep(1000);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
return result;
}
}
and the data object:
public class Data {
private Long id;
private Long recipient;
private String payload;
// getters and setters
}
As you can see in the code above, I need to implement a service that stores new data both in the database and in the cache.
The whole algorithm should look something like this:
Some userA creates a POST request to my controller to store data, and it fires the save method of my service.
Another userB logged into the system sends a GET request to my controller, which fires the load method of my service. In this method, the logged-in user's id sent with the request is compared with the recipients' ids in the map. If the map contains data for this user, they are fetched with the repository; otherwise the algorithm checks every second whether there is new data for that user (this checking will have some timeout, for example 30s; after 30s the request returns empty data, the user creates a new GET request, and so on; see the sketch below).
Can you tell me if it is possible to do this in some elegant way, and how? How should the cache be used for this, or what is the best practice here? I am new to this area, so I will be grateful for any advice.
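A minimal sketch of that polling loop (illustrative only; the 30-second timeout and the flag semantics of the cache are assumptions taken from the description above, and note that ConcurrentHashMap does not accept null values, so save() would need to put a non-null marker):

public Data load(Long recipient) {
    long deadline = System.currentTimeMillis() + 30000L; // assumed 30s timeout
    while (System.currentTimeMillis() < deadline) {
        if (cache.containsKey(recipient)) { // save() flagged new data for this recipient
            cache.remove(recipient);        // consume the flag
            return dataRepository.findDataByRecipient(recipient);
        }
        try {
            Thread.sleep(1000); // poll once per second
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            break;
        }
    }
    return null; // empty result after the timeout; the client issues a new GET
}

For a production setup, Spring's DeferredResult (long polling without blocking the servlet thread for the whole wait) or a message broker would be more idiomatic than sleeping in a loop.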
I have a session scoped bean that reads the contents of a view and stores a hashmap of SelectItem objects. These are then used to populate combobox values on an XPage.
The code works fine EXCEPT when I try to recycle a ViewEntry object in the innermost loop. If the line is commented out, I get my SelectItems returned fine. If the line is uncommented, my comboboxes are empty. So what I'd like to know is:
Why is this happening?
How do I fix this and correctly recycle the object like a good developer should?
If I can't explicitly recycle the object, what is the potential impact on memory etc?
Code (with problem line highlighted and currently commented out):
public class comboBox extends HashMap<String, List<SelectItem>> implements Serializable {
private static final long serialVersionUID = 1L;
public comboBox() {
this.init();
}
private void init() {
Database database = null;
View vwData = null;
ViewNavigator navData = null;
try {
PrimeUser thisUser = (PrimeUser) resolveVariable(FacesContext.getCurrentInstance(), "PrimeUser");
String lang = thisUser.getLang();
database = getCurrentDatabase();
vwData = database.getView("DataLookup");
navData = vwData.createViewNavFromCategory(lang);
ViewEntry keyCat = navData.getFirst();
ViewEntry nextCat = null;
while (keyCat != null) {
String thisKey = (String) keyCat.getColumnValues().get(1);
List<SelectItem> options = new ArrayList<SelectItem>();
options.add(new SelectItem("Please select..."));
ViewEntry dataEntry = navData.getChild(keyCat);
ViewEntry nextChild = null;
while (dataEntry != null) {
nextChild = navData.getNextSibling();
Object optValue = dataEntry.getColumnValues().get(2);
String optLabel = (String) dataEntry.getColumnValues().get(3);
options.add(new SelectItem(optValue, optLabel));
// dataEntry.recycle(); //<---- PROBLEM HERE
dataEntry = nextChild;
}
this.put(thisKey, options);
nextCat = navData.getNextCategory();
keyCat.recycle();
keyCat = nextCat;
}
} catch (NotesException e) {
e.printStackTrace();
} finally {
try {
navData.recycle();
vwData.recycle();
database.recycle();
} catch (NotesException e) {
}
}
}
public List<SelectItem> get(String key) {
return super.get(key); // calling this.get(key) here would recurse forever
}
}
RESOLVED:
The resolution was annoyingly simple (and thanks to Sven Hasselbach for reminding me to review the Server log output again). The change was straightforward - exchange this line:
nextCat = navData.getNextCategory();
for this:
nextCat = getNextSibling(keyCat);
Although "getNextCategory" is a method of ViewNavigator and does not have any parameters; it only gets the next category following the current entry. If the current entry has been recycled - as was the case above - then obviously the current entry is null and the method throws an exception.
"getNextSibling" has an optional ViewEntry parameter which, if specified, will override the pointer to the current ViewEntry that it would otherwise use.
I blame Fridays.
First look at your server console: Do you see a NotesException there? If you run into the Exception, your put will never be executed, so your comboboxes will be empty.
Before you recycle an object, you have to test whether it is null, and surround the call with a try/catch block. There are some implementations that make recycling of Notes objects easier, e.g. this one: http://openntf.org/XSnippets.nsf/snippet.xsp?id=recycleobject-helper-for-recycling-of-objects
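For example, a minimal null-safe helper in that spirit (a sketch of the pattern, not the code from the linked snippet):

import lotus.domino.Base;
import lotus.domino.NotesException;

public class RecycleHelper {
    /** Recycles any number of Notes objects, ignoring nulls and recycle errors. */
    public static void recycle(Base... objects) {
        for (Base obj : objects) {
            if (obj != null) {
                try {
                    obj.recycle();
                } catch (NotesException e) {
                    // already recycled or otherwise unusable; nothing to do
                }
            }
        }
    }
}

The finally block in the question could then recycle navData, vwData and database in one call without risking a NullPointerException.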
A good explanation about recycling can be found here: http://nathantfreeman.wordpress.com/2013/03/21/recycle-and-the-retail-experience/
I'm currently trying to communicate between Java and Flex by using sockets and AMF-serialized objects.
On the Java side I use Amf3Input and Amf3Output from BlazeDS (flex-messaging-common.jar and flex-messaging-core.jar).
The connection is established correctly, and when I send objects from Flex to Java, I can easily read them:
Flex side:
protected function button2_clickHandler(event:MouseEvent):void
{
var tmp:FlexAck = new FlexAck;
tmp.id="123456789123456789123456789";
tmp.name="A";
tmp.source="Aaaaaa";
tmp.ackGroup=false;
s.writeObject(tmp);
s.flush();
}
Java side:
ServerSocket servSoc = new ServerSocket(8888);
Socket s = servSoc.accept();
Amf3Output amf3Output = new Amf3Output(SerializationContext.getSerializationContext());
amf3Output.setOutputStream(s.getOutputStream());
Amf3Input amf3Input = new Amf3Input(SerializationContext.getSerializationContext());
amf3Input.setInputStream(s.getInputStream());
while(true)
{
try
{
Object obj = amf3Input.readObject();
if(obj!=null){
if (obj instanceof AckOrder){
System.out.println(((AckOrder)obj).getId());
}
}
}
catch (Exception e)
{
e.printStackTrace();
break;
}
}
amf3Output.close();
amf3Input.close();
servSoc.close();
This works perfectly, but the problem is reading objects sent from the Java side.
The code I use in Java is:
for(int i=0;i<10;i++){
ack = new AckOrder(i,"A","B", true);
amf3Output.writeObject(ack);
amf3Output.writeObjectEnd();
amf3Output.flush();
}
I have a handler on ProgressEvent.SOCKET_DATA:
trace((s.readObject() as FlexAck).id);
But I get errors such as:
Error #2030: End of File detected
Error #2006: Index Out of bound
If I add manipulations on ByteArrays, I manage to read the first object, but not the following ones.
s.readBytes(tmp,tmp.length);
content = clone(tmp);
(content.readObject());
trace("########################## OK OBJECT RECEIVED");
var ack:FlexAck = (tmp.readObject() as FlexAck);
trace("**********************> id = "+ack.id);
I've spent many hours trying to find something in several forums etc., but nothing helped. So if someone could help me, it would be great.
Thanks
Sylvain
EDIT:
Here is an example that I thought should work, but doesn't. I hope it better illustrates what I aim to do (a permanent connection over a socket and an exchange of messages).
Java class:
import java.net.ServerSocket;
import java.net.Socket;
import awl.oscare.protocol.AckOrder;
import flex.messaging.io.SerializationContext;
import flex.messaging.io.amf.Amf3Input;
import flex.messaging.io.amf.Amf3Output;
public class Main {
public static void main(String[] args) {
while(true)
{
try {
ServerSocket servSoc = new ServerSocket(8888);
Socket s = servSoc.accept();
System.out.println("connection accepted");
Amf3Output amf3Output = new Amf3Output(SerializationContext.getSerializationContext());
amf3Output.setOutputStream(s.getOutputStream());
Amf3Input amf3Input = new Amf3Input(SerializationContext.getSerializationContext());
amf3Input.setInputStream(s.getInputStream());
while(true)
{
try
{
System.out.println("Reading object");
Object obj = amf3Input.readObject();
if(obj!=null)
{
System.out.println(obj.getClass());
if (obj instanceof AckOrder)
{
AckOrder order = new AckOrder();
order.setId(((AckOrder)obj).getId());
order.setName(((AckOrder)obj).getName());
order.setSource(((AckOrder)obj).getSource());
order.setAckGroup(((AckOrder)obj).isAckGroup());
System.out.println(((AckOrder)obj).getId());
amf3Output.writeObject(order);
amf3Output.writeObjectEnd();
amf3Output.flush();
}
}
}
catch (Exception e)
{
e.printStackTrace();
break;
}
}
amf3Output.close();
amf3Input.close();
servSoc.close();
} catch (Exception e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
}
Java serializable object:
package protocol;
import java.io.Serializable;
public class AckOrder implements Serializable {
private static final long serialVersionUID = 5106528318894546695L;
private String id;
private String name;
private String source;
private boolean ackGroup = false;
public String getId() {
return this.id;
}
public void setId(String id) {
this.id = id;
}
public String getName() {
return this.name;
}
public void setName(String name) {
this.name = name;
}
public void setSource(String source) {
this.source = source;
}
public String getSource() {
return this.source;
}
public void setAckGroup(boolean ackGroup) {
this.ackGroup = ackGroup;
}
public boolean isAckGroup() {
return this.ackGroup;
}
public AckOrder()
{
super();
}
}
Flex side:
Main Flex code:
<fx:Script>
<![CDATA[
import mx.collections.ArrayCollection;
import mx.controls.Alert;
import mx.events.FlexEvent;
import mx.utils.object_proxy;
private var _socket:Socket = new Socket();
private function onCreationComplete():void
{
this._socket.connect("localhost",8888);
this._socket.addEventListener(ProgressEvent.SOCKET_DATA, onData);
}
private function onData(e:ProgressEvent):void
{
if(this._socket.bytesAvailable)
{
this._socket.endian = Endian.LITTLE_ENDIAN;
var objects:Array = [];
try{
while(this._socket.bytesAvailable > 0)
{
objects.push(this._socket.readObject());
}
}catch(e:Error){trace(e.message);}
trace("|"+(objects)+"|");
}
}
protected function sendButton_clickHandler(event:MouseEvent):void
{
var tmp:FlexAck = new FlexAck;
tmp.id="1";
tmp.name="A";
tmp.source="B";
tmp.ackGroup=false;
this._socket.writeObject(tmp);
this._socket.flush();
}
]]>
</fx:Script>
<s:Button x="0" y="0" name="send" label="Send" click="sendButton_clickHandler(event)"/>
Flex serializable object:
package
{
[Bindable]
[RemoteClass(alias="protocol.AckOrder")]
public class FlexAck
{
public function FlexAck()
{
}
public var id:String;
public var name:String;
public var source:String;
public var ackGroup:Boolean;
}
}
Edit 25/05/2011:
I've added these listeners in my Flex code:
this._socket.addEventListener(Event.ACTIVATE,onActivate);
this._socket.addEventListener(Event.CLOSE,onClose);
this._socket.addEventListener(Event.CONNECT,onConnect);
this._socket.addEventListener(Event.DEACTIVATE,onDeactivate);
this._socket.addEventListener(IOErrorEvent.IO_ERROR,onIOerror);
this._socket.addEventListener(SecurityErrorEvent.SECURITY_ERROR,onSecurityError);
But there are no errors, and I still don't manage to receive objects correctly.
You have to send the AMF data as a ByteArray on the server:
final ByteArrayOutputStream baos = new ByteArrayOutputStream();
amf3Output.setOutputStream(baos);
amf3Output.writeObject(order);
amf3Output.flush();
amf3Output.close();
s.getOutputStream().write(baos.toByteArray());
Then
this._socket.readObject()
works as expected!
Hi, the problem is caused by the following:
An AMF stream is stateful. When it serializes objects, it compresses them relative to objects that have already been written.
Compression is achieved by referencing previously sent class descriptions, string values and objects using indexes (so, for example, if the first string you sent was "helloWorld", then when you later send that string again, the AMF stream will send string index 0 instead).
Unfortunately, ByteArray and Socket do not maintain reference tables between readObject calls. Thus, even if you keep appending your newly read objects to the end of the same ByteArray object, each call to readObject instantiates new reference tables, discarding previously created ones (this means it should still work for repeated references to the same string within a single object tree).
In your example, you are always writing the same string values to properties. Thus, when you send the second object, its string properties are not serialized as strings, but as references to the strings in the previously written object.
The solution is to create a new AMF stream for each object you send, as sketched below.
This is complete rubbish of course(!) It means we can't really utilize the compression in custom protocols. It would be much better if our protocols could decide when to reset these reference tables, perhaps when they get too big.
For example, if you have an RPC protocol, it would be nice to have an AMF stream pass the remote method names as references rather than strings for speed...
I haven't checked, but I think this sort of thing is done by RTMP. The reason it probably wouldn't have been made available in developer objects like ByteArray and Socket (sigh, I hope this isn't true) is because Adobe wants to push us towards LCDS...
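Concretely, the per-object reset on the Java side could look like this sketch (each iteration builds a fresh Amf3Output, and with it fresh reference tables; the AckOrder constructor is the one used in the question):

for (int i = 0; i < 10; i++) {
    AckOrder ack = new AckOrder(i, "A", "B", true);
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    // a brand-new Amf3Output starts with empty reference tables,
    // so every object goes over the wire fully self-contained
    Amf3Output out = new Amf3Output(SerializationContext.getSerializationContext());
    out.setOutputStream(baos);
    out.writeObject(ack);
    out.flush();
    out.close();
    s.getOutputStream().write(baos.toByteArray());
}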
Addendum/edit: just found this, which provides a solution http://code.google.com/p/cvlib/
After looking at the code, I think what you want to do on the Java end is this:
for(int i=0;i<10;i++){
ack = new AckOrder(i,"A","B", true);
amf3Output.writeObject(ack);
}
amf3Output.flush();
When you do 'flush', you're sending information over the socket; in your original code you flushed after every object, so only one object was being sent at a time. On the Flex end, you should always check the length of the available data and make sure you're not reading past it, which is what causes this error.
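One common way to make that length check reliable (my addition here, not something from the original code) is to length-prefix each AMF payload on the Java side, so the Flex side can wait until a complete object has arrived before calling readObject:

// Serialize the object to a buffer first, then frame it with a 4-byte length prefix.
ByteArrayOutputStream baos = new ByteArrayOutputStream();
amf3Output.setOutputStream(baos);
amf3Output.writeObject(ack);
amf3Output.flush();

DataOutputStream out = new DataOutputStream(s.getOutputStream());
out.writeInt(baos.size()); // length prefix (big-endian)
baos.writeTo(out);         // the AMF payload itself
out.flush();

On the Flex side you would then read the prefix with readUnsignedInt() (leaving the socket at its default big-endian setting to match DataOutputStream) and only call readObject() once bytesAvailable covers the announced length.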
EDIT:
private var _socket:Socket = new Socket();
private function onCreationComplete():void
{
// Add connection socket info here
this._socket.addEventListener(ProgressEvent.SOCKET_DATA, onData);
}
// This gets called every time we get new info, as in after the server flushes
private function onData(e:ProgressEvent):void
{
if(this._socket.bytesAvailable)
{
this._socket.endian = Endian.LITTLE_ENDIAN; // Might not be needed, but often is
// Try to get objects
var objects:Array = [];
try{
while(this._socket.bytesAvailable > 0)
{
objects.push(this._socket.readObject());
}
}catch(e:Error){}
// Do something with objects array
}
}
The onData function is called continually (every time the server sends info) since everything is asynchronous.
I've got another JCo-related question and hope to find help.
With JCo you can easily build up a connection as explained in the example sheets that come with the JCo library. Unfortunately, the only way of building a connection shown there is handled via a created property file. That wouldn't be so bad if there weren't sensitive data in it, but at the very least the password for the SAP user stands in that file, so this way of connection handling is a security hole. The JCo manual says so, too:
"For this example the destination configuration is stored in a file that is called by the program. In practice you should avoid this for security reasons."
but I couldn't find a working solution after all. There are a handful of threads about this topic, like this one:
http://forums.sdn.sap.com/thread.jspa?messageID=7303957
but none of them are helpful. For now I've solved the security problem by deleting the file after building the connection, but that is not a satisfying solution. There has to be a better way of getting the parameters for the connection, especially since the manual itself says so, but I have no clue how.
Has anybody already worked with JCo 3.0 and knows this problem?
Yes, that's possible. You have to create your own implementation of DestinationDataProvider and register it using Environment.registerDestinationDataProvider(). However your DDP obtains the connection data and credentials is up to you. Take a look at net.sf.rcer.conn.connections.ConnectionManager, there's a working example in there.
You need to
copy the private class starting on line 66 and adapt it to your own needs (that is, fetch the connection data from wherever you want to)
perform the registration (line 204) somewhere during the startup of your application
get the connection using some string identifier that will be passed to your DestinationDataProvider.
It's a bit confusing; it was difficult for me to figure this out too.
All you need is an object of type java.util.Properties with the desired fields filled in, but it's up to you how you fill this object.
I did it through a value object; I can fill this VO from a file, a database, a web form...
JCOProvider jcoProvider = null;
SAPVO sap = new SAPVO(); // Value Object
Properties properties = new Properties();
if(jcoProvider == null) {
// Get SAP config from DB
try {
sap = SAPDAO.getSAPConfig(); // DAO object that gets conn data from DB
} catch (Exception ex) {
throw new ConexionSAPException(ex.getMessage());
}
// Create new conn
jcoProvider = new JCOProvider();
}
properties.setProperty(DestinationDataProvider.JCO_ASHOST, sap.getJCO_ASHOST());
properties.setProperty(DestinationDataProvider.JCO_SYSNR, sap.getJCO_SYSNR());
properties.setProperty(DestinationDataProvider.JCO_CLIENT, sap.getJCO_CLIENT());
properties.setProperty(DestinationDataProvider.JCO_USER, sap.getJCO_USER());
properties.setProperty(DestinationDataProvider.JCO_PASSWD, sap.getJCO_PASSWD());
properties.setProperty(DestinationDataProvider.JCO_LANG, sap.getJCO_LANG());
// properties.setProperty(DestinationDataProvider.JCO_TRACE, "10");
try {
jcoProvider.changePropertiesForABAP_AS(properties);
} catch (Exception e) {
throw new ConexionSAPException(e.getMessage());
}
The JCOProvider class:
import com.sap.conn.jco.ext.DestinationDataEventListener;
import com.sap.conn.jco.ext.DestinationDataProvider;
import com.sap.conn.jco.ext.Environment;
import es.grupotec.ejb.util.ConexionSAPException;
import java.util.Properties;
public class JCOProvider implements DestinationDataProvider {
private String SAP_SERVER = "SAPSERVER";
private DestinationDataEventListener eventListener;
private Properties ABAP_AS_properties;
public JCOProvider() {
}
@Override
public Properties getDestinationProperties(String name) {
if (name.equals(SAP_SERVER) && ABAP_AS_properties != null) {
return ABAP_AS_properties;
} else {
return null;
}
// if(ABAP_AS_properties!=null) return ABAP_AS_properties;
// else throw new RuntimeException("Destination " + name + " is not available");
}
@Override
public boolean supportsEvents() {
return true;
}
@Override
public void setDestinationDataEventListener(DestinationDataEventListener eventListener) {
this.eventListener = eventListener;
}
public void changePropertiesForABAP_AS(Properties properties) throws ConexionSAPException {
try {
if (!Environment.isDestinationDataProviderRegistered()) {
if (ABAP_AS_properties == null) {
ABAP_AS_properties = properties;
}
Environment.registerDestinationDataProvider(this);
}
if (properties == null) {
if (eventListener != null) {
eventListener.deleted(SAP_SERVER);
}
ABAP_AS_properties = null;
} else {
ABAP_AS_properties = properties;
if (eventListener != null) {
eventListener.updated(SAP_SERVER);
}
}
} catch (Exception ex) {
throw new ConexionSAPException(ex.getMessage());
}
}
}
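Once the provider is registered and supplied with properties, the destination is obtained through the name the provider answers to ("SAPSERVER" in this class); a short usage sketch:

import com.sap.conn.jco.JCoDestination;
import com.sap.conn.jco.JCoDestinationManager;
import com.sap.conn.jco.JCoException;

// ...
try {
    JCoDestination destination = JCoDestinationManager.getDestination("SAPSERVER");
    destination.ping(); // verifies that the connection data actually works
} catch (JCoException e) {
    e.printStackTrace();
}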
Regards