I have a problem with DBUnit in my test cases. When I create my data in DBUnit, I currently specify IDs explicitly. It looks something like this:
<users user_id="35" corpid="CORP\35" last_login="2014-10-27 00:00:00.0" login_count="1" is_manager="false"/>
<plans plan_id="18332" state="1" owned_by_user="35" revision="4"/>
<plan_history plan_history_id="12307" date_created="2014-08-29 14:40:08.356" state="0" plan_id="18332"/>
<plan_history plan_history_id="12308" date_created="2014-08-29 16:40:08.356" state="1" plan_id="18332"/>
<goals goal_id="12331" goal_name="Dansa" description="Dans"/>
<personal_goals plan_id="18332" personal_goal_id="18338" date_finished="2014-10-28 00:00:00.192" goal_id="12331" state="0"/>
<personal_goal_history personal_goal_id="18338" personal_goal_history_id="18005" date_created="2014-08-29 14:40:08.356" state="1" />
<activities activity_id="13001"/>
<custom_activities activity_name="customActivity" description="Replace" activity_id="13001"/>
<personal_activities personal_activity_id="17338" personal_goal_id="18338" date_finished="2014-10-28 00:00:00.192" state="0" activity_id="13000"/>
<personal_activity_history personal_activity_id="17338" personal_activity_history_id="18338" date_created="2014-08-29 14:40:29.073" state="1" />
Since the id of the user is specified literally, we often get merge conflicts between tests, and they are really cumbersome to resolve. This is because we may be working on different branches and several people may have allocated the same ids. The fix then becomes updating all ids in the seed data and all relational ids, as well as updating the test files, which is really tedious.
I'm therefore looking for some way to autogenerate ids. For instance, functions like getNextId("User") and getLatestId("User") would be of great help. Is there something like this in DBUnit, or could I somehow create such functions myself?
If there are other suggestions for how this problem can be avoided, I'd gladly hear them as well.
Sounds like you are using the same test data file for all tests. It is better practice to have multiple test files: one per test for its test-specific data, plus common files for the "master list" data. The "master list" data does not change per test, so the merge problems you describe never arise in the test-data files.
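For illustration, a minimal sketch of that layout using DBUnit's CompositeDataSet, which merges a shared master file with a per-test file (the file names and the seed method are hypothetical):

import java.io.File;

import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.CompositeDataSet;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;

public class SeedData {
    // Seeds the database with the shared "master list" data plus the data
    // owned by a single test; each test keeps its own small XML file.
    static void seed(IDatabaseConnection connection, String testFile) throws Exception {
        IDataSet master = new FlatXmlDataSetBuilder().build(new File("seed/master-list.xml"));
        IDataSet perTest = new FlatXmlDataSetBuilder().build(new File(testFile));

        // Combine both sets and load them in one CLEAN_INSERT.
        IDataSet combined = new CompositeDataSet(new IDataSet[] { master, perTest });
        DatabaseOperation.CLEAN_INSERT.execute(connection, combined);
    }
}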
I am trying to use CoRB to search for and update a node in a large number of documents.
Sample input:
<hcmt xmlns="http://horn.thoery">
<susceptible>X</susceptible>
<reponsible>foresee–intervention</reponsible>
<intend>Benefit Protagonist</intend>
<justified>Goal Outwiegen</justified>
</hcmt>
XQuery:
declare namespace horn = "http://horn.thoery";

let $resp := "foresee–intervention"
let $docs :=
  cts:search(doc(),
    cts:and-query((
      cts:collection-query("hcmt"),
      cts:path-range-query("/horn:hcmt/horn:responsible", "=", $resp)
    ))
  )
return
  for $doc in $docs
  return
    xdmp:node-replace($doc/horn:hcmt/horn:responsible, "Foresee Intervention")
Expected output:
<hcmt xmlns="http://horn.thoery">
<susceptible>X</susceptible>
<reponsible>Foresee Intervention</reponsible>
<intend>Benefit Protagonist</intend>
<justified>Goal Outwiegen</justified>
</hcmt>
But the node-replace never happens in CoRB, and no error is returned. Other queries work fine in CoRB. How can I get node-replace to work correctly in CoRB?
Thanks in advance for any help.
I created functions to reconcile the encoding issues. This not only mitigates potential API transaction failures but is also a prerequisite for validating and encoding parameter names and element/property/URI names.
That said, here is a sample MarkLogic Java API implementation:
Create a dynamic query construct on the filesystem; in my case, product-query-option.xml (it uses the query value directly: Chooser–Option):
<search xmlns="http://marklogic.com/appservices/search">
<query>
<and-query>
<collection-constraint-query>
<constraint-name>Collection</constraint-name>
<uri>proto</uri>
</collection-constraint-query>
<range-constraint-query>
<constraint-name>ProductType</constraint-name>
<value>Chooser–Option</value>
</range-constraint-query>
</and-query>
</query>
</search>
Deploy the persistent query options to the modules database; in my case, search-lexis.xml. The options file looks like this:
<options xmlns="http://marklogic.com/appservices/search">
<constraint name="Collection">
<collection prefix=""/>
</constraint>
<constraint name="ProductType">
<range type="xs:string" collation="http://marklogic.com/collation/en/S1">
<path-index xmlns:prod="schema://fc.fasset/product">/prod:requestProduct/prod:_metaData/prod:productType</path-index>
</range>
</constraint>
</options>
Then follow on with a dynamic Java search:
File file = new File("src/main/resources/queryoption/product-query-option.xml");
FileHandle fileHandle = new FileHandle(file);
// "queryOption" holds the name of the persistent options deployed above (e.g. "search-lexis.xml")
RawCombinedQueryDefinition rcqDef = queryMgr.newRawCombinedQueryDefinition(fileHandle, queryOption);
You can, of course, combine the query and the options into a single handle in the RawCombinedQueryDefinition.
Your original node-replace translates to a Java partial update: make sure the DocumentPatchBuilder calls setNamespaces with the correct NamespaceContext.
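A minimal sketch of such a patch, assuming a DatabaseClient named dbClient and reusing the namespace from the sample document (class, method, and URI names here are illustrative):

import com.marklogic.client.DatabaseClient;
import com.marklogic.client.document.DocumentPatchBuilder;
import com.marklogic.client.document.XMLDocumentManager;
import com.marklogic.client.marker.DocumentPatchHandle;
import com.marklogic.client.util.EditableNamespaceContext;

public class PatchResponsible {
    // Applies the same replacement as the XQuery node-replace to one document.
    static void patchOne(DatabaseClient dbClient, String uri) {
        XMLDocumentManager docMgr = dbClient.newXMLDocumentManager();

        // setNamespaces needs the prefix binding used in the XPath below.
        EditableNamespaceContext namespaces = new EditableNamespaceContext();
        namespaces.put("horn", "http://horn.thoery");

        DocumentPatchBuilder patchBuilder = docMgr.newPatchBuilder();
        patchBuilder.setNamespaces(namespaces);
        patchBuilder.replaceValue("/horn:hcmt/horn:responsible", "Foresee Intervention");

        DocumentPatchHandle patchHandle = patchBuilder.build();
        docMgr.patch(uri, patchHandle);
    }
}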
For batch data operations, the performant approach is MarkLogic Data Movement: instantiate a QueryBatcher over the searched URIs, supply the replacement value or data fragment through PatchBuilder.replaceValue, and complete each document with
dbClient.newXMLDocumentManager().patch(uri, patchHandle);
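A sketch of that flow, reusing the hypothetical patchOne helper from the previous snippet, with a simple collection query standing in for the original search:

import com.marklogic.client.DatabaseClient;
import com.marklogic.client.datamovement.DataMovementManager;
import com.marklogic.client.datamovement.QueryBatcher;
import com.marklogic.client.query.StructuredQueryBuilder;

public class BatchPatch {
    // Streams matching URIs in batches and patches each document.
    static void run(DatabaseClient dbClient) {
        DataMovementManager dmm = dbClient.newDataMovementManager();
        StructuredQueryBuilder sqb = new StructuredQueryBuilder();

        QueryBatcher batcher = dmm.newQueryBatcher(sqb.collection("hcmt"));
        batcher.onUrisReady(batch -> {
                    for (String uri : batch.getItems()) {
                        PatchResponsible.patchOne(dbClient, uri);
                    }
                })
                .onQueryFailure(Throwable::printStackTrace);

        dmm.startJob(batcher);
        batcher.awaitCompletion();
        dmm.stopJob(batcher);
    }
}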
MarkLogic Data Services: if the above succeeds, you could then move on to a more robust and scalable enterprise SOA approach; please review Data Services.
The implementation with Gradle looks like this:
(Note: all of the transformation inputs should be parameters, including path/element/property name, namespace, value, etc. Nothing is hardcoded.) One proxy service declared in service.json can serve multiple endpoints (under /root/df-ds/fxd) with different types of modules, which gives you free rein to develop in pure Java or extend the development platform to handle complex data operations.
If these operations are persistent node updates, you should consider an in-memory node transform before ingestion. Besides the MarkLogic data transformation tools, you can harness the power of XSLT 2.0+.
Saxon's XPathFactory could be a serviceable vehicle for querying and transforming nodes. I am not sure whether the reverse holds; the ML Java API implements XPath compilation to split large paths and stream transactions. XSLT/Saxon is not my forte, so I can't comment on how it compares on this encode/decode particularity or how it handles streaming of transactions (insert, update, etc.).
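To make the Saxon route concrete, here is a hedged sketch of an in-memory query over the sample document via the JAXP XPathFactory, assuming Saxon-HE (with its DOM support) on the classpath; the file name is hypothetical and the path is the one from the question:

import java.io.File;
import java.util.Iterator;

import javax.xml.namespace.NamespaceContext;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;

import org.w3c.dom.Document;
import org.w3c.dom.Node;

public class SaxonPathQuery {
    public static void main(String[] args) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true); // required for prefixed paths
        Document doc = dbf.newDocumentBuilder().parse(new File("hcmt.xml"));

        // Saxon's JAXP XPathFactory implementation
        XPathFactory xpf = new net.sf.saxon.xpath.XPathFactoryImpl();
        XPath xpath = xpf.newXPath();
        xpath.setNamespaceContext(new NamespaceContext() {
            public String getNamespaceURI(String prefix) {
                return "horn".equals(prefix) ? "http://horn.thoery" : null;
            }
            public String getPrefix(String uri) { return null; }
            public Iterator<String> getPrefixes(String uri) { return null; }
        });

        Node node = (Node) xpath.evaluate("/horn:hcmt/horn:responsible",
                doc, XPathConstants.NODE);
        if (node != null) {
            node.setTextContent("Foresee Intervention"); // in-memory update only
        }
    }
}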
I am trying to create a framework using Selenium and TestNG. As part of the framework I am trying to implement data parameterization, but I am confused about the optimal way to implement it. Here are the approaches I have considered:
With data providers (reading from Excel and storing in Object[][])
With testng.xml
Issues with Data Providers:
Let's say my test needs to handle a large volume of data, say 15 different values; then I need to pass 15 parameters to it. Alternatively, if I create a TestData class to hold and maintain these parameters, then every test has a different data set, so my TestData class ends up with more than 40 different params.
E.g., on an e-commerce website there are many different kinds of params, such as accounts, cards, products, rewards, history, store locations, etc.; for these we may need at least 40 different params declared in TestData, which I don't think is a sensible solution. Some tests may need 10 different pieces of test data, some may need 12. Sometimes within a single test, one iteration needs only 7 params while another needs 12.
How do I manage this effectively?
Issues with testng.xml
Maintaining 20 different accounts, 40 different product details, cards, history, etc. all in a single XML file, while also using it to configure the test suite (parallel execution, which classes to run, and so on), will turn testng.xml into a mess.
So can you please suggest an optimized way to handle data in a testing framework?
How are data parameterization and iterations over different test data handled in real projects?
Assuming that every test knows what sort of test data it is going to receive, here's what I would suggest you do:
Have your TestNG suite XML file pass to the data provider the name of the file from which data is to be read.
Build your data provider so that it receives that file name via TestNG parameters, builds a generic map per test-data iteration (every test receives its parameters as a key/value map), and works with the passed-in map.
This way you have just one data provider that can handle practically anything. You can make it a bit more sophisticated by having it inspect the test method and provide values accordingly.
Here's a skeleton implementation of what I am talking about.
import java.util.List;
import java.util.Map;

import org.testng.ITestContext;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

import com.google.common.collect.Lists;
import com.google.common.collect.Maps;

public class DataProviderExample {

    @Test(dataProvider = "dp")
    public void testMethod(Map<String, String> testdata) {
        System.err.println("****" + testdata);
    }

    @DataProvider(name = "dp")
    public Object[][] getData(ITestContext ctx) {
        // Retrieves the value of <parameter name="fileName" value="..."/> from within
        // the <test> tag of the suite xml file.
        String fileName = ctx.getCurrentXmlTest().getParameter("fileName");
        List<Map<String, String>> maps = extractDataFrom(fileName);
        Object[][] testData = new Object[maps.size()][1];
        for (int i = 0; i < maps.size(); i++) {
            testData[i][0] = maps.get(i);
        }
        return testData;
    }

    // Stub: parse the file and return one map per test iteration.
    private static List<Map<String, String>> extractDataFrom(String file) {
        List<Map<String, String>> maps = Lists.newArrayList();
        maps.add(Maps.newHashMap());
        maps.add(Maps.newHashMap());
        maps.add(Maps.newHashMap());
        return maps;
    }
}
I'm actually trying to do the same (or a similar) thing: I write automation to validate product data on several eComm sites.
My old method
The data comes in Excel format, which I process slightly to get into the shape I want. The automation reads from Excel and executes the runs sequentially.
My new method (so far, WIP)
My company recently started using SauceLabs, so I started prototyping ways to take advantage of X number of VMs in parallel, and I ran into the same issues as you. This isn't a polished or even finished solution; it's something I'm currently working on, but I thought I would share some of what I'm doing in case it helps you.
I started reading the SauceLabs docs and ran across the sample code below, which started me down this path.
https://github.com/saucelabs-sample-scripts/C-Sharp-Selenium/blob/master/SaucePNUnit_Test.cs
I'm using NUnit, and I found in their docs a way to pass data into the test that allows parallel execution and lets me store it all neatly in another class.
https://github.com/nunit/docs/wiki/TestFixtureSource-Attribute
This keeps me from having a bunch of [TestFixture] tags stacked on top of my script class (as in the demo code above). Right now I have:
[TestFixtureSource(typeof(Configs), "StandardBrowsers")]
[Parallelizable]
public class ProductSetupUnitTest
where the Configs class contains an object[] called StandardBrowsers, like:
public class Configs
{
    static object[] StandardBrowsers =
    {
        new object[] { "chrome", "latest", "windows 10", "Product Name1", "Product ID1" },
        new object[] { "chrome", "latest", "windows 10", "Product Name2", "Product ID2" },
        new object[] { "chrome", "latest", "windows 10", "Product Name3", "Product ID3" },
        new object[] { "chrome", "latest", "windows 10", "Product Name4", "Product ID4" },
    };
}
I actually got this working this morning, so I know the approach works, and I'm looking at ways to further tweak and improve it.
So, in your case, you would just load up the object[] with all the data you want to pass. You will probably have to declare a string for each field you might want to pass; if you don't need a particular field in a given run, pass an empty string.
My next step is to load the object[] from Excel. The pain point for me is logging. I have a pretty mature logging system in my existing sequential-execution script, and it's going to be hard to give that up or settle for something with reduced functionality. Currently I write everything to a CSV, load that into Excel, and then quickly process failures using Excel filtering, etc. My current thought is to have each script write its own CSV and then pull them all together after all the runs are complete. That part is still theoretical right now, though.
Hope this helps. Feel free to ask me questions if something isn't clear. I'll answer what I can.
I've read https://developer.jboss.org/wiki/HibernateCoreMigrationGuide30 and plenty of additional information on the web, and I couldn't find anything related to this. There is no spec anywhere.
What I would like to know is whether there is a way to achieve with hibernate3 (org.hibernate.tool.ant.HibernateToolTask) what I did using hibernate2 (net.sf.hibernate.tool.hbm2java.FinderRenderer).
I need to migrate from Hibernate2 to Hibernate3, and it was going well until I reached some *Finder.java files that had been generated by Hibernate2. I'm trying to do the same with Hibernate3, but I can find no way to do it.
I'm using jpos, so before Hibernate3 the *Finder.java POJOs were generated by the _codegen.xml file located in the eecore module. _codegen.xml looks like this:
<codegen>
<generate renderer="net.sf.hibernate.tool.hbm2java.BasicRenderer"/>
<generate suffix="Finder" renderer="net.sf.hibernate.tool.hbm2java.FinderRenderer"/>
</codegen>
Does anyone know how to do that, or at least what happened to this functionality in Hibernate3?
I am trying to create some statements and their inverses in Java using OpenRDF Sesame, following the tutorial in the Sesame 2.7 documentation. Let's say I have created the following URIs and statements and added them to the Sesame repository:
RepositoryConnection connection; // assumed to be initialized elsewhere
ValueFactory factory = ValueFactoryImpl.getInstance();
Resource author_1 = factory.createURI("http://www.author.org/author_1");
Resource article_1 = factory.createURI("http://www.title.org/article_1");
URI wrote = factory.createURI("http://www.paper.org/wrote");
URI writtenby = factory.createURI("http://www.paper.org/writtenby");
Statement statement_1 = factory.createStatement(author_1, wrote, article_1);
connection.add(statement_1);
The code above creates a statement describing that an author wrote an article, and I can see this statement in the OpenRDF Workbench. What I am trying to do is express the inverse using OWL.INVERSEOF, to obtain that article_1 is written by author_1, as follows:
connection.add(writtenby, OWL.INVERSEOF, wrote);
When I run the project and get back to the OpenRDF Workbench, I see the following statements:
<http://www.author.org/author_1>, <http://www.paper.org/wrote>, <http://www.title.org/article_1>
<http://www.paper.org/writtenby>, <http://www.w3.org/2002/07/owl#inverseOf>, <http://www.paper.org/wrote>
When I click on <http://www.paper.org/writtenby>, I can't find the inverse statement that article_1 is written by author_1, but I can find that author_1 wrote article_1. Is my way of expressing the inverse wrong, or am I misunderstanding the concept? Thank you very much in advance for your help.
It is as Joshua says: OpenRDF/Sesame doesn't have support for this kind of reasoning. I think it supports only some basic RDF(S) reasoning during load, and it also (still) doesn't support custom rules.
You can achieve what you are asking by using Sesame with OWLIM. OWLIM-Lite and OWLIM-SE do support OWL reasoning (rule-based, forward-chaining, materialization). There are a number of predefined rule sets supported by OWLIM; you would probably want owl-max.
Depending on what you are trying to achieve, you might want a full OWL reasoner such as Pellet. However, Sesame doesn't support Pellet...
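If you stay on plain Sesame, one pragmatic workaround is to materialize the inverse triple yourself instead of relying on owl:inverseOf; a minimal sketch reusing the variables from the question:

// Hypothetical workaround: add the inverse statement explicitly,
// since plain Sesame will not infer it from owl:inverseOf.
Statement statement_2 = factory.createStatement(article_1, writtenby, author_1);
connection.add(statement_2);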
Update: the link in the answer is both interesting and useful, but unfortunately it does not address the need for a Java API, so I am still looking forward to any input.
I'm building a database of chemical compounds. I need all the synonyms (IUPAC and common names) as well as safety data for each.
I'll be using the freely available data at PubChem (http://pubchem.ncbi.nlm.nih.gov/)
There's an easy way to query each compound with simple HTTP GETs. For example, to obtain the glycerol data, the URL is:
http://pubchem.ncbi.nlm.nih.gov/summary/summary.cgi?cid=753
And the following URL returns an easy-to-parse format:
http://pubchem.ncbi.nlm.nih.gov/summary/summary.cgi?cid=753&disopt=DisplaySDF
but it responds with only very basic info, lacking safety data and giving only a few common names.
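For what it's worth, the plain-GET route needs no special library at all; a minimal Java sketch using the JDK's HttpClient (nothing PubChem-specific is assumed):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PubChemFetch {
    public static void main(String[] args) throws Exception {
        // Fetch the SDF record for glycerol (CID 753) with a plain HTTP GET.
        String url = "http://pubchem.ncbi.nlm.nih.gov/summary/summary.cgi?cid=753&disopt=DisplaySDF";
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}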
There is one public-domain API for Java that seems very complete, developed by a group at Scripps (citation). The code is here.
Unfortunately, this API is not very well documented, and it's quite difficult to follow due to the complexity of the data involved.
From what I gathered, pubchemdb uses the PubChem Power User Gateway (PUG) XML API.
Has anyone used this API (or any other available one)? I would appreciate a short description of it, or a tutorial on how to get started with it.
The Cactvs chemoinformatics toolkit (free for academic/educational use) has full PubChem integration. Using the scripting environment, you can easily do something like:
cactvs>ens create 753
ens0
cactvs>ens get ens0 E_NAMESET
PROPANE-1,2,3-TRIOL GLYCEROL 8043-29-6 29796-42-7 30049-52-6 37228-54-9 75398-78-6 78630-16-7 8013-25-0 175385-78-1 25618-55-7 64333-26-2 56-81-5 {Tegin M} LS-1377 G8773_SIGMA 15523_RIEDEL {Glycerin, natural} NCGC00090950-03 191612_ALDRICH 15524_RIEDEL {Glycerol solution} L-glycerol 49767_FLUKA {Biodiesel impurity} 49770_FLUKA 49771_FLUKA NCGC00090950-01 49927_FLUKA Glycerol-Gelatine G7757_SIAL GOL D-glycerol G9012_SIAL {Polyhydric alcohols} c0066 MOON {NSC 9230} G2025_SIGMA ZINC00895048 49781_FLUKA {Concentrated glycerin} {Concentrated glycerin (JP15)} D00028 {Glycerin (JP15/USP)} 44892U_SUPELCO {Glycerin, concentrated (JAN)} CRY 49782_FLUKA NCGC00090950-02 G6279_SIAL W252506_ALDRICH G7893_SIAL {Glycerin, concentrated} 33224_RIEDEL Bulbold Cristal Glyceol G9281_SIGMA Glycerol-1,2,3-3H G1901_SIGMA G7043_SIGMA 1,2,3-trihydroxypropane 1,2,3-trihydroxypropanol glycerin G2289_SIAL G9406_SIGMA {Glycerol-[2-3H]} CHEBI:17754 Glyzerin Oelsuess InChI=1/C3H8O3/c4-1-3(6)2-5/h3-6H,1-2H {90 Technical glycerine} Dagralax {Glycerin, anhydrous} {Glycerin, synthetic} Glycerine Glyceritol {Glycyl alcohol} Glyrol Glysanin NSC9230 Ophthalgan Osmoglyn Propanetriol {Synthetic glycerin} {Synthetic glycerine} Trihydroxypropane Vitrosupos {WLN: Q1YQ1Q} Glycerol-1,3-14C {4-01-00-02751 (Beilstein Handbook Reference)} AI3-00091 {BRN 0635685} {CCRIS 2295} {Caswell No. 469} {Citifluor AF 2} {Clyzerin, wasserfrei [German]} {EINECS 200-289-5} {EPA Pesticide Chemical Code 063507} {FEMA No. 2525} {Glicerina [DCIT]} {Glicerol [INN-Spanish]} {Glycerin (mist)} {Glycerin [JAN]} {Glycerin mist} {Glycerine mist} Glycerinum {Glycerolum [INN-Latin]} Grocolene {HSDB 492} IFP {Incorporation factor} 1,2,3-Propanetriol C00116 Optim {Propanetriol (VAN)} {1,2,3-PROPANETRIOL, HOMOPOLYMER} {Glycerol polymer} {Glycerol, polymers} {HL 80} {PGL 300} {PGL 500} {PGL 700} Polyglycerin Polyglycerine Polyglycerol {Unigly G 2} {Unigly G 6} G5516_SIGMA MolMap_000024
cactvs>
This hides all the PUG ugliness, but in any case I dare say that PUG is well documented. The toolkit goes well beyond simple data downloads: you can even open and query PubChem like a local SD file if you want to.
PubChem does not contain safety data, though. And safety data is country/region-dependent and strictly regulated; you should be really careful not to get hit with liabilities. Have your approach checked by legal personnel!