Given a directory, my application traverses it and loads the .mdb MS Access databases it finds using the Jackcess API. Inside each database there is a table named GCMT_CMT_PROPERTIES with a column named cmt_data containing some text. I also have a Mapper object (which essentially resembles a Map<String,String> but allows duplicate keys) which I use as a dictionary when replacing certain words in a string.
So, for example, if the mapper contains fox -> dog, the sentence "The fox jumps" becomes "The dog jumps".
The design I'm going with for this program is as follows:
1. Given a directory, traverse all subdirectories and load all .mdb files into a File[].
2. For each db file in File[], create a Task<Void> called "TaskMdbUpdater" and pass it the db file.
3. Dispatch and run each task as it is created (see 2. above).
TaskMdbUpdater is responsible for locating the appropriate table and column in the db file it was given, then iteratively running a "find & replace" routine on each row of the table to detect words from the dictionary and replace them (as shown in the example above), updating each row before finally closing the db. Each instance of TaskMdbUpdater is a background thread with a Jackcess API DatabaseBuilder assigned to it, so it is able to manipulate the db.
In its current state, the code runs without throwing any exceptions whatsoever; however, when I manually open the db through Access and inspect a given row, it appears not to have changed. I've tried to pin down the source of the issue without any luck and would appreciate any support. If you need to see more code, let me know and I'll update my question accordingly.
public class TaskDatabaseTaskDispatcher extends Task<Void> {
private String parentDir;
private String dbFileFormat;
private Mapper mapper;
public TaskDatabaseTaskDispatcher(String parent, String dbFileFormat, Mapper mapper) {
this.parentDir = parent;
this.dbFileFormat = dbFileFormat;
this.mapper = mapper;
}
@Override
protected Void call() throws Exception {
File[] childDirs = getOnlyDirectories(getDirectoryChildFiles(new File(this.parentDir)));
DatabaseBuilder[] dbs = loadDatabasesInParent(childDirs);
Controller.dprint("TaskDatabaseTaskDispatcher", dbs.length + " databases were found in parent directory");
TaskMdbUpdater[] tasks = new TaskMdbUpdater[dbs.length];
Thread[] workers = new Thread[dbs.length];
for(int i=0; i<dbs.length; i++) {
// for each db, dispatch Task so a worker can update that db.
tasks[i] = new TaskMdbUpdater(dbs[i], mapper);
workers[i] = new Thread(tasks[i]);
workers[i].setDaemon(true);
workers[i].start();
}
return null;
}
private DatabaseBuilder[] loadDatabasesInParent(File[] childDirs) throws IOException {
DatabaseBuilder[] dbs = new DatabaseBuilder[childDirs.length];
// Traverse children and load dbs[]
for(int i=0; i<childDirs.length; i++) {
File dbFile = FileUtils.getFileInDirectory(
childDirs[i].getCanonicalFile(),
childDirs[i].getName() + this.dbFileFormat);
dbs[i] = new DatabaseBuilder(dbFile);
}
return dbs;
}
}
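One thing that stands out in the dispatcher (an editorial observation, not something the logs confirm): the workers are daemon threads and are never joined, so if the application exits before they finish, the JVM kills them before db.close() runs and Jackcess never flushes the changes to disk. A minimal sketch of a variant that waits for completion, using an executor instead of raw threads (the pool size of 4 is an arbitrary choice):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

ExecutorService pool = Executors.newFixedThreadPool(4);
for (TaskMdbUpdater task : tasks) {
    pool.submit(task); // javafx.concurrent.Task implements Runnable via FutureTask
}
pool.shutdown();
// Block until every database has been processed and closed
pool.awaitTermination(1, TimeUnit.HOURS);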
// StringUtils class, utility methods
public class StringUtils {
public static String findAndReplace(String str, Mapper mapper) {
String updatedStr = str;
for(int i=0; i<mapper.getMappings().size(); i++) {
updatedStr = updatedStr.replaceAll(mapper.getMappings().get(i).getKey(), mapper.getMappings().get(i).getValue());
}
return updatedStr;
}
}
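A caveat worth flagging here (this is standard java.lang.String behavior, not specific to this code): replaceAll compiles its first argument as a regular expression and treats $ and \ in the replacement string specially. If the dictionary keys are meant to be literal words, a quoted variant is safer; a minimal sketch:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public static String findAndReplaceLiteral(String str, Mapper mapper) {
    String updatedStr = str;
    for (aMap m : mapper.getMappings()) {
        // Pattern.quote makes the key match as literal text;
        // Matcher.quoteReplacement stops '$' and '\' in the value
        // from being read as group references
        updatedStr = updatedStr.replaceAll(
                Pattern.quote(m.getKey()),
                Matcher.quoteReplacement(m.getValue()));
    }
    return updatedStr;
}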
// FileUtils class, utility methods:
public class FileUtils {
/**
* Returns only directories in given File[].
* @param list
* @return
*/
public static File[] getOnlyDirectories(File[] list) throws IOException, NullPointerException {
List<File> filteredList = new ArrayList<>();
for(int i=0; i<list.length; i++) {
if(list[i].isDirectory()) {
filteredList.add(list[i]);
}
}
File[] correctSizeFilteredList = new File[filteredList.size()];
for(int i=0; i<filteredList.size(); i++) {
correctSizeFilteredList[i] = filteredList.get(i);
}
return correctSizeFilteredList;
}
/**
* Returns a File[] containing all children under specified parent file.
* @param parent
* @return
*/
public static File[] getDirectoryChildFiles(File parent) {
return parent.listFiles();
}
}
public class Mapper {
private List<aMap> mappings;
public Mapper(List<aMap> mappings) {
this.mappings = mappings;
}
/**
* Returns the mapping dictionary, typically used for extracting individual mappings.
* @return List of type aMap
*/
public List<aMap> getMappings() {
return mappings;
}
public void setMappings(List<aMap> mappings) {
this.mappings = mappings;
}
}
/**
* Represents a single String based K -> V mapping.
*/
public class aMap {
private String[] mapping; // [0] - key, [1] - value
public aMap(String[] mapping) {
this.mapping = mapping;
}
public String getKey() {
return mapping[0];
}
public String getValue() {
return mapping[1];
}
public String[] getMapping() {
return mapping;
}
public void setMapping(String[] mapping) {
this.mapping = mapping;
}
}
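As a design aside (purely optional), aMap is essentially a string pair, and java.util.AbstractMap.SimpleImmutableEntry already provides getKey/getValue, so the standard library type could stand in for it:

import java.util.AbstractMap.SimpleImmutableEntry;
import java.util.Map;

// Equivalent to new aMap(new String[] {"fox", "dog"})
Map.Entry<String, String> mapping = new SimpleImmutableEntry<>("fox", "dog");
String key = mapping.getKey();     // "fox"
String value = mapping.getValue(); // "dog"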
Update 1:
To verify my custom StringUtils.findAndReplace logic, I've performed the following unit test (in JUnit), which passes:
@Test
public void simpleReplacementTest() {
// Construct a test mapper/dictionary
List<aMap> aMaps = new ArrayList<aMap>();
aMaps.add(new aMap(new String[] {"fox", "dog"})); // {K, V} = K -> V
Mapper mapper = new Mapper(aMaps);
// Perform replacement
String corpus = "The fox jumps";
String updatedCorpus = StringUtils.findAndReplace(corpus, mapper);
assertEquals("The dog jumps", updatedCorpus);
}
I'm including my TaskMdbUpdater class here separately, with some logging code included, as I suspect the point of failure lies somewhere in call():
/**
* Updates a given .mdb database according to specifications defined internally.
* @since 2.2
*/
public class TaskMdbUpdater extends Task<Void> {
private final String TABLE_NAME = "GCMT_CMT_PROPERTIES";
private final String COLUMN_NAME = "cmt_data";
private DatabaseBuilder dbPackage;
private Mapper mapper;
public TaskMdbUpdater(DatabaseBuilder dbPack, Mapper mapper) {
super();
this.dbPackage = dbPack;
this.mapper = mapper;
}
@Override
protected Void call() {
try {
// Controller.dprint("TaskMdbUpdater", "Worker: " + Thread.currentThread().getName() + " running");
// Open db and extract Table
Database db = this.dbPackage
.open();
Logger.debug("Opened database: {}", db.getFile().getName());
Table table = db.getTable(TABLE_NAME);
Logger.debug("Opening table: {}", table.getName());
Iterator<Row> tableRows = table.iterator();
// Controller.dprint("TaskMdbUpdater", "Updating database: " + db.getFile().getName());
int i=0;
try {
while( tableRows.hasNext() ) {
// Row is basically a Map<Column_Name, Value>
Row cRow = tableRows.next();
Logger.trace("Current row: {}", cRow);
// Controller.dprint(Thread.currentThread().getName(), "Database name: " + db.getFile().getName());
// Controller.dprint("TaskMdbUpdater", "existing row: " + cRow.toString());
String str = cRow.getString(COLUMN_NAME);
Logger.trace("Row {} column field contents (before find/replace): {}", i, str);
String newStr = performFindAndReplaceOnString(str);
Logger.trace("Row {} column field contents (after find/replace): {}", i, newStr);
cRow.put(COLUMN_NAME, newStr);
Logger.debug("Updating field in row {}", i);
Row newRow = table.updateRow(cRow); // updateRow returns the new, updated row. Ignoring this.
Logger.debug("Calling updateRow on table with modified row");
// Controller.dprint("TaskMdbUpdater", "new row: " + newRow.toString());
i++;
Logger.trace("i = {}", i);
}
} catch(NoSuchElementException e) {
// e.printStackTrace();
Logger.error("Thread has iterated past number of rows in table", e);
}
Logger.info("Iterated through {} rows in table {}", i, table.getName());
db.close();
Logger.debug("Closing database: {}", db.getFile().getName());
} catch (Exception e) {
// e.printStackTrace();
Logger.error("An error occurred while attempting to update row value", e);
}
return null;
}
/**
* @see javafx.concurrent.Task#failed()
*/
@Override
protected void failed() {
super.failed();
Logger.error("Task failed");
}
@Override
protected void succeeded() {
Logger.debug("Task succeeded");
}
private String performFindAndReplaceOnString(String str) {
// Logger.trace("OLD: [" + str + "]");
String updatedStr = null;
for(int i=0; i<mapper.getMappings().size(); i++) {
// loop through all parameter names in mapper to search for in str.
updatedStr = findAndReplace(str, this.mapper);
}
// Logger.trace("NEW: [" + updatedStr + "]");
return updatedStr;
}
}
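A related hardening note (a sketch, assuming the Jackcess version in use has Database implement Closeable, as current versions do): if the task dies mid-update, db.close() never runs, and with it the flush it performs. try-with-resources guarantees the close on every exit path:

import com.healthmarketscience.jackcess.Database;
import com.healthmarketscience.jackcess.Row;
import com.healthmarketscience.jackcess.Table;

try (Database db = this.dbPackage.open()) {
    Table table = db.getTable(TABLE_NAME);
    // Table is Iterable<Row>, so this walks every row once
    for (Row row : table) {
        row.put(COLUMN_NAME, performFindAndReplaceOnString(row.getString(COLUMN_NAME)));
        table.updateRow(row);
    }
} // close() is called here even on exceptions, flushing pending writes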
Here's a small excerpt from my log. As you can see, it doesn't seem to do anything after opening the table, which has left me a bit perplexed:
INFO (16-02-2017 17:27:59) [Thread-9] NAMEMAP.logic.TaskDatabaseTaskDispatcher.call(): Located the following directories under specified MOIS parent which contains an .mdb file:
[01_Parent_All_Safe_Test[ RV_DMS_0041RV_DMS_0001RV_DMS_0003RV_DMS_0005RV_DMS_0007RV_DMS_0012RV_DMS_0013RV_DMS_0014RV_DMS_0016RV_DMS_0017RV_DMS_0018RV_DMS_0020RV_DMS_0023RV_DMS_0025RV_DMS_0028RV_DMS_0029RV_DMS_0031RV_DMS_0033RV_DMS_0034RV_DMS_0035RV_DMS_0036RV_DMS_0038RV_DMS_0039RV_DMS_0040 ]]
...
DEBUG (16-02-2017 17:27:59) [Thread-9] NAMEMAP.logic.TaskDatabaseTaskDispatcher.call(): Created new task: NAMEMAP.logic.TaskMdbUpdater#4cfe46fe
DEBUG (16-02-2017 17:27:59) [Thread-9] NAMEMAP.logic.TaskDatabaseTaskDispatcher.call(): Created new worker: Thread[Thread-22,5,main]
DEBUG (16-02-2017 17:27:59) [Thread-9] NAMEMAP.logic.TaskDatabaseTaskDispatcher.call(): Set worker Thread[Thread-22,5,main] as daemon
DEBUG (16-02-2017 17:27:59) [Thread-9] NAMEMAP.logic.TaskDatabaseTaskDispatcher.call(): Dispatching worker: Thread[Thread-22,5,main]
...
DEBUG (16-02-2017 17:28:00) [Thread-22] NAMEMAP.logic.TaskMdbUpdater.call(): Opened database: RV_DMS_0023.mdb
DEBUG (16-02-2017 17:28:00) [Thread-22] NAMEMAP.logic.TaskMdbUpdater.call(): Opening table: GCMT_CMT_PROPERTIES
After this point, there aren't any more entries in the log, and the processor spikes at 100% load, remaining that way until I force-kill the application. This could mean the program gets stuck in an infinite while loop; however, if that were the case, shouldn't there be log entries in the file?
Update 2:
Okay, I've narrowed the problem down further by printing TRACE-level logging to stdout. It seems that my performFindAndReplaceOnString is extremely inefficient: it never gets past the first row of these dbs because it's grinding away at the long string. Any suggestions on how I can perform the string replacement efficiently for this use case?
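Since the slowdown is in repeated full scans of one long string (findAndReplace already walks the whole string once per mapping, and performFindAndReplaceOnString repeats that entire loop mapper.size() more times), a single-pass replacement might help. A minimal sketch, assuming the keys are literal words and that duplicate keys should resolve to their first value:

import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public static String findAndReplaceSinglePass(String str, Mapper mapper) {
    if (mapper.getMappings().isEmpty()) {
        return str; // nothing to replace
    }
    // Build one alternation of all quoted keys, e.g. \Qfox\E|\Qcat\E
    StringBuilder alternation = new StringBuilder();
    Map<String, String> lookup = new HashMap<>();
    for (aMap m : mapper.getMappings()) {
        if (alternation.length() > 0) {
            alternation.append('|');
        }
        alternation.append(Pattern.quote(m.getKey()));
        lookup.putIfAbsent(m.getKey(), m.getValue()); // first mapping wins
    }
    Matcher matcher = Pattern.compile(alternation.toString()).matcher(str);
    StringBuffer result = new StringBuffer(str.length());
    // One left-to-right pass; each hit is swapped via the lookup table
    while (matcher.find()) {
        matcher.appendReplacement(result,
                Matcher.quoteReplacement(lookup.get(matcher.group())));
    }
    matcher.appendTail(result);
    return result.toString();
}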
Related
Before anything, the title doesn't convey what I really want to ask.
What I want to know is: how can I build a map that, for several users, collects their data and groups it all together? I'm currently using two lists, one for the users' names and another for their works. I tried using map.put, but it kept overwriting the previous entry. What I'd like to obtain is as follows:
Desired output:
{user1 = work1, work2, work3 , user2 = work1, work2 , userN = workN}
Current output:
{[user1, user2, user3, user4]=[work1, work2, work3, work4, work5 (user1) , work1 (user2), work1, work2, work3 ( user3 )]}
This is the code that I'm currently using to achieve the above.
private static Map<List<String>, List<String>> repositoriesUserData = new HashMap<>();
private static Set<String> collaboratorNames = new HashSet<>();
public static void main(String[] args) throws Exception {
login();
getCollabs(GITHUB_REPO_NAME);
repositoriesUnderUser();
}
public GitManager(String AUTH, String USERNAME, String REPO_NAME) throws IOException {
this.GITHUB_LOGIN = USERNAME;
this.GITHUB_OAUTH = AUTH;
this.GITHUB_REPO_NAME = REPO_NAME;
this.githubLogin = new GitHubBuilder().withOAuthToken(this.GITHUB_OAUTH, this.GITHUB_LOGIN).build();
this.userOfLogin = this.githubLogin.getUser(GITHUB_LOGIN);
}
public static void login() throws IOException {
new GitManager(GIT_TOKEN, GIT_LOGIN, GITHUB_REPO_NAME);
connect();
}
public static void connect() throws IOException {
if (githubLogin.isCredentialValid()) {
valid = true;
githubLogin.connect(GITHUB_LOGIN, GITHUB_OAUTH);
userOfLogin = githubLogin.getUser(GITHUB_LOGIN);
}
}
public static String getCollabs(String repositoryName) throws IOException {
GHRepository collaboratorsRepository = userOfLogin.getRepository(repositoryName);
collaboratorNames = collaboratorsRepository.getCollaboratorNames();
String collaborators = collaboratorNames.toString();
System.out.println("Collaborators for the following Repository: " + repositoryName + "\nAre: " + collaborators);
String out = "Collaborators for the following Repository: " + repositoryName + "\nAre: " + collaborators;
return out;
}
public static List<String> fillList() {
List<String> collaborators = new ArrayList<>();
collaboratorNames.forEach(s -> {
collaborators.add(s);
});
return collaborators;
}
public static String repositoriesUnderUser() throws IOException {
GHUser user;
List<String> names = new ArrayList<>();
List<String> repoNames = new ArrayList<>();
for (int i = 0; i < fillList().size(); i++) {
user = githubLogin.getUser(fillList().get(i));
Map<String, GHRepository> temp = user.getRepositories();
names.add(user.getLogin());
temp.forEach((c, b) -> {
repoNames.add(b.getName());
});
}
repositoriesUserData.put(names,repoNames);
System.out.println(repositoriesUserData);
return "temporaryReturn";
}
All help is appreciated!
I'll give it a try (code in question still not working for me):
If I understood correctly, you want a Map that contains the repositories for each user.
Therefore I think repositoriesUserData should be a Map<String, List<String>>.
With that in mind, let's fill the map in each loop cycle, with the user from the list as key and the list of repository names as value.
The method would look like this (the temporary return has been removed and replaced with void):
public static void repositoriesUnderUser() throws IOException {
    for (int i = 0; i < fillList().size(); i++) {
        GHUser user = githubLogin.getUser(fillList().get(i));
        Map<String, GHRepository> temp = user.getRepositories();
        // One entry per user: login as key, that user's repository names as value
        repositoriesUserData.put(user.getLogin(),
                temp.values().stream().map(GHRepository::getName).collect(Collectors.toList()));
    }
}
Edit: (a short explanation of what is happening in your code)
You are collecting all usernames into the local List names and also adding all repository names to the local List repoNames.
At the end of the method you put a single new entry into your map repositoriesUserData.
That means at the end of the method you have added just one entry to the map, where
key = all of the users
value = all of the repositories of all the users (and because it's a list, if two users share a repository, its name is added twice)
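If the map is built incrementally instead (one user at a time inside the loop), computeIfAbsent is the usual idiom for this group-values-under-a-key pattern; a small self-contained sketch with made-up names:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

Map<String, List<String>> reposByUser = new HashMap<>();
// computeIfAbsent creates the list on a user's first work, then reuses it,
// so later additions never overwrite earlier ones
reposByUser.computeIfAbsent("user1", k -> new ArrayList<>()).add("work1");
reposByUser.computeIfAbsent("user1", k -> new ArrayList<>()).add("work2");
reposByUser.computeIfAbsent("user2", k -> new ArrayList<>()).add("work1");
System.out.println(reposByUser); // {user1=[work1, work2], user2=[work1]}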
PLEASE READ THE EDIT BELOW
I am trying to create a file archive program that receives a POST request with a PDF file embedded in its request body and then saves that PDF file directly into a database.
But I receive an "Unsupported Media Type" / "Content type 'application/pdf;charset=UTF-8' not supported" error message in Postman, as well as this in Eclipse when sending the request:
2020-05-25 11:20:58.551 WARN 3944 --- [nio-8080-exec-2] .w.s.m.s.DefaultHandlerExceptionResolver : Resolved [org.springframework.web.HttpMediaTypeNotSupportedException: Content type 'application/pdf;charset=UTF-8' not supported]
Below is my current code. I've looked up multiple similar questions on here, but to no avail.
I am using Postman to send out my POST requests.
Here is the code of my Controller Class.
private final DocumentRepository docRepository;
private final DocumentService documentService;
/*@Autowired
public Controller(DocumentRepository docRepository, FileStorage fileStorage, DocumentService documentService) {
this.docRepository = docRepository;
this.fileStorage = fileStorage;
this.doc
}*/
// Just for testing purposes
@GetMapping("/files")
public List<Document> files() {
return docRepository.findAll();
}
// POST Method. Add a new entry to the Database.
// WIP. Works, but creates 2 new columns "file_name" and "size_in_bytes" with the values
// instead of inserting them under the existing "fileName" and "sizeInBytes" columns
@PostMapping(value="/document", consumes=MediaType.APPLICATION_PDF_VALUE)
public Optional<Document> postDocument(@RequestHeader(value="fileName") String fileName, @RequestBody MultipartHttpServletRequest file
) {
Optional<Document> result = documentService.postToDB(fileName, file);
return result;
}
}
And then the code for the Service Class that handles the logic.
@Service
public class DocumentService {
public static String directory = System.getProperty("user.dir")+ "/test_uploads";
private final DocumentRepository docRepository;
public DocumentService(DocumentRepository docRepository) {
this.docRepository = docRepository;
}
public Optional<Document> postToDB(String fileName, MultipartHttpServletRequest file) {
// temporary counter to see if there already is another entry with the same name
// temporary id to compare all the ids and iterate the one with the biggest value.
int counter=0;
int temp_id=0;
Timestamp current_timestamp = new Timestamp(Calendar.getInstance().getTime().getTime());
// This loop is supposed to check if there already is an entry with the same file name; if yes, the counter is incremented by 1.
// Bugged. As of right now, it will still add a new entry whose file name is identical to another entry's.
for(int i = 0; i < docRepository.count(); i++) {
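// NOTE: == compares references, not contents; the check below likely
// needs fileName.equals(...) to ever detect a duplicate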
if(docRepository.findAll().get(i).getFileName() == fileName) {
counter = counter + 1;
}
}
// This checks if the counter is less than one, i.e. whether no file name duplicates were found
if (counter < 1) {
// This gets every ID of every entry and saves it in the temporary variable
for(int i = 0; i < docRepository.count(); i++) {
temp_id= docRepository.findAll().get(i).getId();
// Each entry's ID is then compared against the saved value; if it is bigger, the variable is overwritten
// with the bigger ID, until the biggest ID remains.
if(docRepository.findAll().get(i).getId() > temp_id) {
temp_id = docRepository.findAll().get(i).getId();
}
}
// After the for-loop closes, the new file is added with the biggest ID + 1 and the fileName from the header of the POST request.
Optional<Document> service_result;
try {
service_result = DataAccess.saveEntry(fileName,file, temp_id, docRepository, current_timestamp, 1 );
return service_result;
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
return null;
}
else {
return Optional.empty();
}
}
}
And then finally, the DataAccess class that saves the requests.
public class DataAccess {
public static Optional<Document> saveEntry(String fileName, MultipartHttpServletRequest file, Integer temp_id, DocumentRepository docRepository, Timestamp updated, Integer version) throws IOException {
docRepository.save(new Document(temp_id +1,fileName+ ".pdf",file.getFile(fileName).getBytes(), file.getFile(fileName).getSize(),updated, version));
return docRepository.findById(temp_id +1);
}
}
EDIT 1: I changed consumes to multipart/form-data, and I am sending the POST request in Postman with a custom Content-Type header set to application/form-data.
@PostMapping(value="/document", consumes="multipart/form-data")
public Optional<Document> postDocument(@RequestHeader(value="fileName") String fileName, @RequestBody MultipartHttpServletRequest file
) {
Optional<Document> result = documentService.postToDB(fileName, file);
return result;
}
Now it seems to work, but I get a new exception:
org.apache.tomcat.util.http.fileupload.FileUploadException: the request was rejected because no multipart boundary was found
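This boundary error usually means the Content-Type header was written by hand: the multipart boundary is generated per request, so the header has to be left for Postman to fill in (choose Body > form-data and delete any manually added Content-Type header). On the Spring side, a multipart part binds to a MultipartFile parameter rather than to a MultipartHttpServletRequest in the @RequestBody; a minimal sketch under that assumption (the part name "file" is made up and must match the form-data key in Postman, and postToDB would need its signature adjusted to accept a MultipartFile):

import java.io.IOException;
import java.util.Optional;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestHeader;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.multipart.MultipartFile;

@PostMapping(value = "/document", consumes = MediaType.MULTIPART_FORM_DATA_VALUE)
public Optional<Document> postDocument(
        @RequestHeader("fileName") String fileName,
        @RequestParam("file") MultipartFile file) throws IOException {
    // file.getBytes() and file.getSize() replace the
    // MultipartHttpServletRequest lookups used in DataAccess
    return documentService.postToDB(fileName, file);
}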
I can easily query the Alfresco audit log in REST using this query:
http://localhost:8080/alfresco/service/api/audit/query/audit-custom?verbose=true
But how can I perform the same request in Java, within an Alfresco module?
It must be synchronous.
A lazy solution would be to call the REST URL in Java, but it would probably be inefficient, and more importantly it would require me to store an admin's password somewhere.
I noticed AuditService has an auditQuery method, so I am trying to call it. Unfortunately, it seems to be meant for asynchronous operations? I don't need callbacks: I need to wait until the queried data is ready before going on to the next step.
Here is my implementation, mostly copied from the source code of the REST API:
int maxResults = 10000;
if (!auditService.isAuditEnabled(AUDIT_APPLICATION, ("/" + AUDIT_APPLICATION))) {
throw new WebScriptException(
"Auditing for " + AUDIT_APPLICATION + " is disabled!");
}
final List<Map<String, Object>> entries =
new ArrayList<Map<String,Object>>(maxResults);
AuditQueryCallback callback = new AuditQueryCallback() {
@Override
public boolean valuesRequired() {
return true; // true = verbose
}
@Override
public boolean handleAuditEntryError(
Long entryId, String errorMsg, Throwable error) {
return true;
}
@Override
public boolean handleAuditEntry(
Long entryId,
String applicationName,
String user,
long time,
Map<String, Serializable> values) {
// One result map per audit entry
Map<String, Object> entry = new HashMap<String, Object>();
// Convert values to Strings
Map<String, String> valueStrings =
new HashMap<String, String>(values.size() * 2);
for (Map.Entry<String, Serializable> mapEntry : values.entrySet()) {
String key = mapEntry.getKey();
Serializable value = mapEntry.getValue();
try {
String valueString = DefaultTypeConverter.INSTANCE.convert(
String.class, value);
valueStrings.put(key, valueString);
}
catch (TypeConversionException e) {
// Use the toString()
valueStrings.put(key, value.toString());
}
}
entry.put(JSON_KEY_ENTRY_VALUES, valueStrings);
entries.add(entry);
return true;
}
};
AuditQueryParameters params = new AuditQueryParameters();
params.setApplicationName(AUDIT_APPLICATION);
params.setForward(true);
auditService.auditQuery(callback, params, maxResults);
Though the callback might make it look asynchronous, it is not.
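Since the call is synchronous, constraining the result window is the main way to keep it cheap; AuditQueryParameters supports narrowing by time, for example (a sketch reusing the params object above; values are epoch milliseconds):

// Restrict the synchronous query to the last 24 hours
long now = System.currentTimeMillis();
params.setFromTime(now - 24L * 60 * 60 * 1000);
params.setToTime(now);
auditService.auditQuery(callback, params, maxResults);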
I am trying to fetch records from the database using Hibernate's scrollable results and, with reference to this GitHub project, send each record as a chunked response.
Controller:
@Transactional(readOnly=true)
public Result fetchAll() {
try {
final Iterator<String> sourceIterator = Summary.fetchAll();
response().setHeader("Content-disposition", "attachment; filename=Summary.csv");
Source<String, ?> s = Source.from(() -> sourceIterator);
return ok().chunked(s.via(Flow.of(String.class).map(i -> ByteString.fromString(i+"\n")))).as(Http.MimeTypes.TEXT);
} catch (Exception e) {
return badRequest(e.getMessage());
}
}
Service:
public static Iterator<String> fetchAll() {
StatelessSession session = ((Session) JPA.em().getDelegate()).getSessionFactory().openStatelessSession();
org.hibernate.Query query = session.createQuery("select l.id from Summary l")
.setFetchSize(Integer.MIN_VALUE).setCacheable(false).setReadOnly(true);
ScrollableResults results = query.scroll(ScrollMode.FORWARD_ONLY);
return new models.ScrollableResultIterator<>(results, String.class);
}
Iterator:
public class ScrollableResultIterator<T> implements Iterator<T> {
private final ScrollableResults results;
private final Class<T> type;
public ScrollableResultIterator(ScrollableResults results, Class<T> type) {
this.results = results;
this.type = type;
}
@Override
public boolean hasNext() {
return results.next();
}
@Override
public T next() {
return type.cast(results.get(0));
}
}
For testing purposes, I have 1007 records in my table; whenever I call this endpoint, it always returns only 503 records.
I raised the Akka log level to DEBUG and tried it again; it logs the following line 1007 times:
2016-07-25 19:55:38 +0530 [DEBUG] from org.hibernate.loader.Loader in application-akka.actor.default-dispatcher-73 - Result row:
From the log I can confirm that it is fetching everything, but I couldn't work out where the remaining records get lost.
I ran the same query in my workbench, exported the result to a local file, and compared it with the file generated by the endpoint (LHS: records generated by the endpoint; RHS: file exported from Workbench).
The first row matches; the second and third didn't match. After that, it matches alternate records until the end.
Please correct me if I am doing anything wrong, and tell me whether this is the right approach for generating a CSV from large db tables.
For the sake of testing, I removed the CSV conversion logic from the snippet above.
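One plausible cause for roughly half the rows disappearing (an educated guess, since the original post doesn't confirm it): Iterator.hasNext() is expected to be side-effect free, but the implementation above advances the ScrollableResults cursor on every call, so any consumer that calls hasNext() more than once per next() silently skips rows. Buffering the look-ahead keeps the contract; a minimal sketch:

import java.util.Iterator;
import java.util.NoSuchElementException;
import org.hibernate.ScrollableResults;

public class ScrollableResultIterator<T> implements Iterator<T> {
    private final ScrollableResults results;
    private final Class<T> type;
    private T lookahead;      // row fetched by hasNext() but not yet handed out
    private boolean buffered; // true while lookahead holds a pending row

    public ScrollableResultIterator(ScrollableResults results, Class<T> type) {
        this.results = results;
        this.type = type;
    }

    @Override
    public boolean hasNext() {
        // Only advance the cursor when nothing is buffered yet,
        // so repeated hasNext() calls are harmless
        if (!buffered && results.next()) {
            lookahead = type.cast(results.get(0));
            buffered = true;
        }
        return buffered;
    }

    @Override
    public T next() {
        if (!hasNext()) {
            throw new NoSuchElementException();
        }
        buffered = false;
        return lookahead;
    }
}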
// Controller code
// Prepare a chunked text stream
ExportAsChuncked eac = new ExportAsChuncked();
response().setHeader("Content-disposition","attachment; filename=results.csv");
Chunks<String> chunks = new StringChunks() {
// Called when the stream is ready
public void onReady(Chunks.Out<String> out) {
try {
eac.exportData(scrollableIterator, out);
}catch (XOException e) {
Logger.error(ERROR_WHILE_DOWNLOADING_RESPONSE, e);
}
out.close();
}
};
// Serves this stream with 200 OK
return ok(chunks);
// Export as chunk logic
class ExportAsChuncked {
void exportData(Iterator<String> data, Chunks.Out<String> out) {
while(data.hasNext()) {
out.write(data.next());
}
}
}
I want to reset the database AND the sequences after each test in Java with DBUnit.
I've seen this question, but it doesn't have the code solution I am struggling to find:
How to use Oracle Sequence Numbers in DBUnit?
I've found the answer; it was in the official documentation. It is as easy as adding a reset_sequences attribute, listing the sequences you want to reset, to the dataset you use to prepare the database:
<?xml version='1.0' encoding='UTF-8'?>
<dataset reset_sequences="emp_seq, dept_seq">
<emp empno="1" ename="Scott" deptno="10" job="project manager" />
....
</dataset>
This solution does not work perfectly, as it doesn't really reset the sequence; it only simulates the reset on the inserted rows. If you want to effectively reset it, you have to execute some extra statements. I've extended DatabaseOperation for that purpose with this class.
public static final DatabaseOperation SEQUENCE_RESETTER_POSTGRES = new DatabaseOperation() {
@Override
public void execute(IDatabaseConnection connection, IDataSet dataSet)
throws DatabaseUnitException, SQLException {
String[] tables = dataSet.getTableNames();
Statement statement = connection.getConnection().createStatement();
for (String table : tables) {
int startWith = dataSet.getTable(table).getRowCount() + 1;
statement.execute("alter sequence " + table + "_PK_SEQ RESTART WITH "+ startWith);
}
}
};
public static final DatabaseOperation SEQUENCE_RESETTER_ORACLE = new DatabaseOperation() {
@Override
public void execute(IDatabaseConnection connection, IDataSet dataSet)
throws DatabaseUnitException, SQLException {
String[] tables = dataSet.getTableNames();
Statement statement = connection.getConnection().createStatement();
for (String table : tables) {
int startWith = dataSet.getTable(table).getRowCount() + 1;
statement.execute("drop sequence " + table + "_PK_SEQ if exists");
statement.execute("create sequence " + table + "_PK_SEQ START WITH " + startWith);
}
}
};
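To actually apply one of these resetters next to the normal data load, DBUnit's CompositeOperation chains operations; a sketch of wiring it into a test case (assuming the _PK_SEQ naming convention above):

import org.dbunit.operation.CompositeOperation;
import org.dbunit.operation.DatabaseOperation;

@Override
protected DatabaseOperation getSetUpOperation() throws Exception {
    // Load the dataset first, then realign the sequences with the inserted rows
    return new CompositeOperation(
            DatabaseOperation.CLEAN_INSERT,
            SEQUENCE_RESETTER_POSTGRES);
}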
I've tested the solution provided by @Chexpir, and here is an improved/cleaner way (PostgreSQL implementation). Also note that the sequence is reset to 1 (instead of retrieving the row count).
public class ResetSequenceOperationDecorator extends DatabaseOperation {
private DatabaseOperation decoree;
public ResetSequenceOperationDecorator(DatabaseOperation decoree) {
this.decoree = decoree;
}
@Override
public void execute(IDatabaseConnection connection, IDataSet dataSet) throws DatabaseUnitException, SQLException {
String[] tables = dataSet.getTableNames();
Statement statement = connection.getConnection().createStatement();
for (String table : tables) {
try {
statement.execute("ALTER SEQUENCE " + table + "_id_seq RESTART WITH 1");
}
// Don't care because the sequence does not appear to exist (but catch it silently)
catch(SQLException ex) {
}
}
decoree.execute(connection, dataSet);
}
}
And in your DatabaseTestCase:
public abstract class AbstractDBTestCase extends DataSourceBasedDBTestCase {
@Override
protected DatabaseOperation getTearDownOperation() throws Exception {
return new ResetSequenceOperationDecorator(DatabaseOperation.DELETE_ALL);
}
}
Please check the link below, in case it helps you:
How to revert the database back to the initial state using dbUnit?